00:00:00.000 Started by upstream project "autotest-nightly" build number 4357 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3720 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.094 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.094 The recommended git tool is: git 00:00:00.094 using credential 00000000-0000-0000-0000-000000000002 00:00:00.096 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.141 Fetching changes from the remote Git repository 00:00:00.143 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.192 Using shallow fetch with depth 1 00:00:00.192 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.192 > git --version # timeout=10 00:00:00.242 > git --version # 'git version 2.39.2' 00:00:00.242 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.277 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.277 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.239 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.253 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.265 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.265 > git config core.sparsecheckout # timeout=10 00:00:05.275 > git read-tree -mu HEAD # timeout=10 00:00:05.291 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.317 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.317 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.416 [Pipeline] Start of Pipeline 00:00:05.429 [Pipeline] library 00:00:05.430 Loading library shm_lib@master 00:00:05.430 Library shm_lib@master is cached. Copying from home. 00:00:05.447 [Pipeline] node 00:00:05.459 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.460 [Pipeline] { 00:00:05.467 [Pipeline] catchError 00:00:05.468 [Pipeline] { 00:00:05.479 [Pipeline] wrap 00:00:05.486 [Pipeline] { 00:00:05.495 [Pipeline] stage 00:00:05.497 [Pipeline] { (Prologue) 00:00:05.706 [Pipeline] sh 00:00:05.985 + logger -p user.info -t JENKINS-CI 00:00:05.999 [Pipeline] echo 00:00:06.000 Node: WFP4 00:00:06.005 [Pipeline] sh 00:00:06.297 [Pipeline] setCustomBuildProperty 00:00:06.306 [Pipeline] echo 00:00:06.307 Cleanup processes 00:00:06.311 [Pipeline] sh 00:00:06.590 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.590 3613464 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.599 [Pipeline] sh 00:00:06.879 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.879 ++ grep -v 'sudo pgrep' 00:00:06.879 ++ awk '{print $1}' 00:00:06.879 + sudo kill -9 00:00:06.879 + true 00:00:06.890 [Pipeline] cleanWs 00:00:06.898 [WS-CLEANUP] Deleting project workspace... 00:00:06.898 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.903 [WS-CLEANUP] done 00:00:06.907 [Pipeline] setCustomBuildProperty 00:00:06.918 [Pipeline] sh 00:00:07.198 + sudo git config --global --replace-all safe.directory '*' 00:00:07.279 [Pipeline] httpRequest 00:00:08.005 [Pipeline] echo 00:00:08.007 Sorcerer 10.211.164.20 is alive 00:00:08.016 [Pipeline] retry 00:00:08.018 [Pipeline] { 00:00:08.027 [Pipeline] httpRequest 00:00:08.031 HttpMethod: GET 00:00:08.032 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.032 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.035 Response Code: HTTP/1.1 200 OK 00:00:08.035 Success: Status code 200 is in the accepted range: 200,404 00:00:08.036 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.178 [Pipeline] } 00:00:09.193 [Pipeline] // retry 00:00:09.197 [Pipeline] sh 00:00:09.475 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.491 [Pipeline] httpRequest 00:00:10.177 [Pipeline] echo 00:00:10.179 Sorcerer 10.211.164.20 is alive 00:00:10.188 [Pipeline] retry 00:00:10.189 [Pipeline] { 00:00:10.203 [Pipeline] httpRequest 00:00:10.207 HttpMethod: GET 00:00:10.208 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:10.208 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:10.221 Response Code: HTTP/1.1 200 OK 00:00:10.221 Success: Status code 200 is in the accepted range: 200,404 00:00:10.221 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:20.763 [Pipeline] } 00:01:20.786 [Pipeline] // retry 00:01:20.794 [Pipeline] sh 00:01:21.077 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:23.622 [Pipeline] sh 00:01:23.905 + git -C spdk log --oneline -n5 00:01:23.905 e01cb43b8 mk/spdk.common.mk sed the minor version 00:01:23.905 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:01:23.905 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:01:23.905 66289a6db build: use VERSION file for storing version 00:01:23.905 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:01:23.915 [Pipeline] } 00:01:23.928 [Pipeline] // stage 00:01:23.936 [Pipeline] stage 00:01:23.938 [Pipeline] { (Prepare) 00:01:23.953 [Pipeline] writeFile 00:01:23.968 [Pipeline] sh 00:01:24.252 + logger -p user.info -t JENKINS-CI 00:01:24.264 [Pipeline] sh 00:01:24.547 + logger -p user.info -t JENKINS-CI 00:01:24.610 [Pipeline] sh 00:01:24.893 + cat autorun-spdk.conf 00:01:24.893 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.893 SPDK_TEST_NVMF=1 00:01:24.893 SPDK_TEST_NVME_CLI=1 00:01:24.893 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.893 SPDK_TEST_NVMF_NICS=e810 00:01:24.893 SPDK_RUN_ASAN=1 00:01:24.893 SPDK_RUN_UBSAN=1 00:01:24.893 NET_TYPE=phy 00:01:24.901 RUN_NIGHTLY=1 00:01:24.906 [Pipeline] readFile 00:01:24.930 [Pipeline] withEnv 00:01:24.932 [Pipeline] { 00:01:24.944 [Pipeline] sh 00:01:25.228 + set -ex 00:01:25.228 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:25.228 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:25.228 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.228 ++ SPDK_TEST_NVMF=1 00:01:25.228 ++ SPDK_TEST_NVME_CLI=1 00:01:25.228 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:25.228 ++ 
SPDK_TEST_NVMF_NICS=e810 00:01:25.228 ++ SPDK_RUN_ASAN=1 00:01:25.228 ++ SPDK_RUN_UBSAN=1 00:01:25.228 ++ NET_TYPE=phy 00:01:25.228 ++ RUN_NIGHTLY=1 00:01:25.228 + case $SPDK_TEST_NVMF_NICS in 00:01:25.228 + DRIVERS=ice 00:01:25.228 + [[ tcp == \r\d\m\a ]] 00:01:25.228 + [[ -n ice ]] 00:01:25.228 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:25.228 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:25.228 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:25.228 rmmod: ERROR: Module i40iw is not currently loaded 00:01:25.228 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:25.228 + true 00:01:25.228 + for D in $DRIVERS 00:01:25.228 + sudo modprobe ice 00:01:25.228 + exit 0 00:01:25.237 [Pipeline] } 00:01:25.253 [Pipeline] // withEnv 00:01:25.258 [Pipeline] } 00:01:25.272 [Pipeline] // stage 00:01:25.280 [Pipeline] catchError 00:01:25.283 [Pipeline] { 00:01:25.298 [Pipeline] timeout 00:01:25.298 Timeout set to expire in 1 hr 0 min 00:01:25.300 [Pipeline] { 00:01:25.313 [Pipeline] stage 00:01:25.315 [Pipeline] { (Tests) 00:01:25.329 [Pipeline] sh 00:01:25.612 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:25.613 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:25.613 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:25.613 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:25.613 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:25.613 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:25.613 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:25.613 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:25.613 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:25.613 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:25.613 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:25.613 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:25.613 + source /etc/os-release 00:01:25.613 ++ NAME='Fedora Linux' 00:01:25.613 ++ VERSION='39 (Cloud Edition)' 00:01:25.613 ++ ID=fedora 00:01:25.613 ++ VERSION_ID=39 00:01:25.613 ++ VERSION_CODENAME= 00:01:25.613 ++ PLATFORM_ID=platform:f39 00:01:25.613 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:25.613 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:25.613 ++ LOGO=fedora-logo-icon 00:01:25.613 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:25.613 ++ HOME_URL=https://fedoraproject.org/ 00:01:25.613 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:25.613 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:25.613 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:25.613 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:25.613 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:25.613 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:25.613 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:25.613 ++ SUPPORT_END=2024-11-12 00:01:25.613 ++ VARIANT='Cloud Edition' 00:01:25.613 ++ VARIANT_ID=cloud 00:01:25.613 + uname -a 00:01:25.613 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux 00:01:25.613 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:28.148 Hugepages 00:01:28.148 node hugesize free / total 00:01:28.148 node0 1048576kB 0 / 0 00:01:28.148 node0 2048kB 0 / 0 00:01:28.148 node1 1048576kB 0 / 0 00:01:28.148 node1 2048kB 0 / 0 00:01:28.148 00:01:28.148 Type BDF Vendor Device NUMA Driver Device Block devices 
00:01:28.148 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:28.148 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:28.148 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:28.149 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:28.149 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:28.149 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:28.149 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:28.149 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:28.149 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:28.149 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:28.149 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:28.149 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:28.149 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:28.149 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:28.149 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:28.149 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:28.149 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:28.149 + rm -f /tmp/spdk-ld-path 00:01:28.149 + source autorun-spdk.conf 00:01:28.149 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.149 ++ SPDK_TEST_NVMF=1 00:01:28.149 ++ SPDK_TEST_NVME_CLI=1 00:01:28.149 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:28.149 ++ SPDK_TEST_NVMF_NICS=e810 00:01:28.149 ++ SPDK_RUN_ASAN=1 00:01:28.149 ++ SPDK_RUN_UBSAN=1 00:01:28.149 ++ NET_TYPE=phy 00:01:28.149 ++ RUN_NIGHTLY=1 00:01:28.149 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:28.149 + [[ -n '' ]] 00:01:28.149 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:28.149 + for M in /var/spdk/build-*-manifest.txt 00:01:28.149 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:28.149 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:28.149 + for M in /var/spdk/build-*-manifest.txt 00:01:28.149 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:28.149 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:28.149 + for M in /var/spdk/build-*-manifest.txt 00:01:28.149 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:28.149 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:28.149 ++ uname 00:01:28.149 + [[ Linux == \L\i\n\u\x ]] 00:01:28.149 + sudo dmesg -T 00:01:28.149 + sudo dmesg --clear 00:01:28.149 + dmesg_pid=3614924 00:01:28.149 + [[ Fedora Linux == FreeBSD ]] 00:01:28.149 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:28.149 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:28.149 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:28.149 + [[ -x /usr/src/fio-static/fio ]] 00:01:28.149 + export FIO_BIN=/usr/src/fio-static/fio 00:01:28.149 + FIO_BIN=/usr/src/fio-static/fio 00:01:28.149 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:28.149 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:28.149 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:28.149 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:28.149 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:28.149 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:28.149 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:28.149 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:28.149 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:28.149 + sudo dmesg -Tw 00:01:28.149 10:03:21 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:28.149 10:03:21 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:28.149 10:03:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.149 10:03:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:28.149 10:03:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:28.149 10:03:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:28.149 10:03:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:28.149 10:03:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_ASAN=1 00:01:28.149 10:03:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:28.149 10:03:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:28.149 10:03:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=1 00:01:28.149 10:03:21 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:28.149 10:03:21 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:28.149 10:03:21 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:28.149 10:03:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:28.149 10:03:21 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:28.149 10:03:21 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:28.149 10:03:21 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:28.149 10:03:21 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:28.149 10:03:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.149 10:03:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.149 10:03:21 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.149 10:03:21 -- paths/export.sh@5 -- $ export PATH 00:01:28.149 10:03:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.149 10:03:21 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:28.149 10:03:21 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:28.149 10:03:21 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734080601.XXXXXX 00:01:28.149 10:03:21 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734080601.sETXdF 00:01:28.149 10:03:21 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:28.149 10:03:21 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:28.149 10:03:21 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:28.149 10:03:21 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:28.149 10:03:21 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:28.149 10:03:21 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:28.149 10:03:21 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:28.149 10:03:21 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.149 10:03:21 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:28.149 10:03:21 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:28.149 10:03:21 -- pm/common@17 -- $ local monitor 00:01:28.149 10:03:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.149 10:03:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.149 10:03:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.149 10:03:21 -- pm/common@21 -- $ date +%s 00:01:28.149 10:03:21 -- pm/common@21 -- $ date +%s 00:01:28.149 10:03:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.149 10:03:21 -- pm/common@25 -- $ sleep 1 00:01:28.149 10:03:21 -- pm/common@21 -- $ date +%s 00:01:28.149 10:03:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p 
monitor.autobuild.sh.1734080601 00:01:28.149 10:03:21 -- pm/common@21 -- $ date +%s 00:01:28.149 10:03:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734080601 00:01:28.149 10:03:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734080601 00:01:28.149 10:03:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734080601 00:01:28.149 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734080601_collect-vmstat.pm.log 00:01:28.149 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734080601_collect-cpu-load.pm.log 00:01:28.149 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734080601_collect-cpu-temp.pm.log 00:01:28.149 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734080601_collect-bmc-pm.bmc.pm.log 00:01:29.527 10:03:22 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:29.527 10:03:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:29.527 10:03:22 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:29.527 10:03:22 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:29.527 10:03:22 -- spdk/autobuild.sh@16 -- $ date -u 00:01:29.527 Fri Dec 13 09:03:22 AM UTC 2024 00:01:29.527 10:03:22 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:29.527 v25.01-rc1-2-ge01cb43b8 00:01:29.527 10:03:22 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:29.527 10:03:22 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:29.527 10:03:22 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:29.527 10:03:22 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:29.527 10:03:22 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.527 ************************************ 00:01:29.527 START TEST asan 00:01:29.527 ************************************ 00:01:29.527 10:03:23 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:29.527 using asan 00:01:29.527 00:01:29.527 real 0m0.000s 00:01:29.527 user 0m0.000s 00:01:29.527 sys 0m0.000s 00:01:29.527 10:03:23 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:29.527 10:03:23 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:29.527 ************************************ 00:01:29.527 END TEST asan 00:01:29.527 ************************************ 00:01:29.527 10:03:23 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:29.527 10:03:23 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:29.527 10:03:23 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:29.527 10:03:23 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:29.527 10:03:23 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.527 ************************************ 00:01:29.527 START TEST ubsan 00:01:29.527 ************************************ 00:01:29.527 10:03:23 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:29.527 using ubsan 00:01:29.527 00:01:29.527 real 
0m0.000s 00:01:29.527 user 0m0.000s 00:01:29.527 sys 0m0.000s 00:01:29.527 10:03:23 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:29.527 10:03:23 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:29.527 ************************************ 00:01:29.527 END TEST ubsan 00:01:29.527 ************************************ 00:01:29.527 10:03:23 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:29.527 10:03:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:29.527 10:03:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:29.527 10:03:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:29.527 10:03:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:29.527 10:03:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:29.527 10:03:23 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:29.527 10:03:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:29.527 10:03:23 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:29.527 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:29.527 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:29.786 Using 'verbs' RDMA provider 00:01:42.930 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:55.146 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:55.146 Creating mk/config.mk...done. 00:01:55.146 Creating mk/cc.flags.mk...done. 00:01:55.146 Type 'make' to build. 
00:01:55.146 10:03:47 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:01:55.146 10:03:47 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:55.146 10:03:47 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:55.146 10:03:47 -- common/autotest_common.sh@10 -- $ set +x 00:01:55.146 ************************************ 00:01:55.146 START TEST make 00:01:55.146 ************************************ 00:01:55.146 10:03:47 make -- common/autotest_common.sh@1129 -- $ make -j96 00:02:03.280 The Meson build system 00:02:03.280 Version: 1.5.0 00:02:03.280 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:03.280 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:03.280 Build type: native build 00:02:03.280 Program cat found: YES (/usr/bin/cat) 00:02:03.280 Project name: DPDK 00:02:03.280 Project version: 24.03.0 00:02:03.280 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:03.280 C linker for the host machine: cc ld.bfd 2.40-14 00:02:03.280 Host machine cpu family: x86_64 00:02:03.280 Host machine cpu: x86_64 00:02:03.280 Message: ## Building in Developer Mode ## 00:02:03.280 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:03.280 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:03.280 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:03.280 Program python3 found: YES (/usr/bin/python3) 00:02:03.280 Program cat found: YES (/usr/bin/cat) 00:02:03.280 Compiler for C supports arguments -march=native: YES 00:02:03.280 Checking for size of "void *" : 8 00:02:03.280 Checking for size of "void *" : 8 (cached) 00:02:03.280 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:03.280 Library m found: YES 00:02:03.280 Library numa found: YES 00:02:03.280 Has header "numaif.h" : YES 00:02:03.280 Library fdt found: NO 00:02:03.280 Library execinfo found: NO 00:02:03.280 Has header "execinfo.h" : YES 00:02:03.280 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:03.280 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:03.280 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:03.280 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:03.280 Run-time dependency openssl found: YES 3.1.1 00:02:03.280 Run-time dependency libpcap found: YES 1.10.4 00:02:03.280 Has header "pcap.h" with dependency libpcap: YES 00:02:03.280 Compiler for C supports arguments -Wcast-qual: YES 00:02:03.280 Compiler for C supports arguments -Wdeprecated: YES 00:02:03.280 Compiler for C supports arguments -Wformat: YES 00:02:03.280 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:03.280 Compiler for C supports arguments -Wformat-security: NO 00:02:03.280 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:03.280 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:03.280 Compiler for C supports arguments -Wnested-externs: YES 00:02:03.280 Compiler for C supports arguments -Wold-style-definition: YES 00:02:03.280 Compiler for C supports arguments -Wpointer-arith: YES 00:02:03.280 Compiler for C supports arguments -Wsign-compare: YES 00:02:03.280 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:03.280 Compiler for C supports arguments -Wundef: YES 00:02:03.280 Compiler for C supports arguments -Wwrite-strings: YES 
00:02:03.280 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:03.280 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:03.280 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:03.280 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:03.280 Program objdump found: YES (/usr/bin/objdump) 00:02:03.280 Compiler for C supports arguments -mavx512f: YES 00:02:03.280 Checking if "AVX512 checking" compiles: YES 00:02:03.280 Fetching value of define "__SSE4_2__" : 1 00:02:03.280 Fetching value of define "__AES__" : 1 00:02:03.280 Fetching value of define "__AVX__" : 1 00:02:03.280 Fetching value of define "__AVX2__" : 1 00:02:03.280 Fetching value of define "__AVX512BW__" : 1 00:02:03.280 Fetching value of define "__AVX512CD__" : 1 00:02:03.280 Fetching value of define "__AVX512DQ__" : 1 00:02:03.280 Fetching value of define "__AVX512F__" : 1 00:02:03.280 Fetching value of define "__AVX512VL__" : 1 00:02:03.280 Fetching value of define "__PCLMUL__" : 1 00:02:03.280 Fetching value of define "__RDRND__" : 1 00:02:03.280 Fetching value of define "__RDSEED__" : 1 00:02:03.280 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:03.280 Fetching value of define "__znver1__" : (undefined) 00:02:03.280 Fetching value of define "__znver2__" : (undefined) 00:02:03.280 Fetching value of define "__znver3__" : (undefined) 00:02:03.280 Fetching value of define "__znver4__" : (undefined) 00:02:03.280 Library asan found: YES 00:02:03.280 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:03.280 Message: lib/log: Defining dependency "log" 00:02:03.280 Message: lib/kvargs: Defining dependency "kvargs" 00:02:03.280 Message: lib/telemetry: Defining dependency "telemetry" 00:02:03.280 Library rt found: YES 00:02:03.280 Checking for function "getentropy" : NO 00:02:03.280 Message: lib/eal: Defining dependency "eal" 00:02:03.280 Message: lib/ring: Defining dependency "ring" 00:02:03.280 Message: lib/rcu: Defining dependency "rcu" 00:02:03.280 Message: lib/mempool: Defining dependency "mempool" 00:02:03.280 Message: lib/mbuf: Defining dependency "mbuf" 00:02:03.280 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:03.280 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:03.280 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:03.280 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:03.280 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:03.280 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:03.280 Compiler for C supports arguments -mpclmul: YES 00:02:03.280 Compiler for C supports arguments -maes: YES 00:02:03.280 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:03.280 Compiler for C supports arguments -mavx512bw: YES 00:02:03.280 Compiler for C supports arguments -mavx512dq: YES 00:02:03.280 Compiler for C supports arguments -mavx512vl: YES 00:02:03.280 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:03.280 Compiler for C supports arguments -mavx2: YES 00:02:03.280 Compiler for C supports arguments -mavx: YES 00:02:03.280 Message: lib/net: Defining dependency "net" 00:02:03.280 Message: lib/meter: Defining dependency "meter" 00:02:03.280 Message: lib/ethdev: Defining dependency "ethdev" 00:02:03.280 Message: lib/pci: Defining dependency "pci" 00:02:03.280 Message: lib/cmdline: Defining dependency "cmdline" 00:02:03.280 Message: lib/hash: Defining dependency "hash" 00:02:03.280 Message: lib/timer: Defining dependency 
"timer" 00:02:03.280 Message: lib/compressdev: Defining dependency "compressdev" 00:02:03.280 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:03.280 Message: lib/dmadev: Defining dependency "dmadev" 00:02:03.281 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:03.281 Message: lib/power: Defining dependency "power" 00:02:03.281 Message: lib/reorder: Defining dependency "reorder" 00:02:03.281 Message: lib/security: Defining dependency "security" 00:02:03.281 Has header "linux/userfaultfd.h" : YES 00:02:03.281 Has header "linux/vduse.h" : YES 00:02:03.281 Message: lib/vhost: Defining dependency "vhost" 00:02:03.281 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:03.281 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:03.281 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:03.281 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:03.281 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:03.281 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:03.281 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:03.281 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:03.281 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:03.281 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:03.281 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:03.281 Configuring doxy-api-html.conf using configuration 00:02:03.281 Configuring doxy-api-man.conf using configuration 00:02:03.281 Program mandb found: YES (/usr/bin/mandb) 00:02:03.281 Program sphinx-build found: NO 00:02:03.281 Configuring rte_build_config.h using configuration 00:02:03.281 Message: 00:02:03.281 ================= 00:02:03.281 Applications Enabled 00:02:03.281 ================= 00:02:03.281 00:02:03.281 apps: 00:02:03.281 00:02:03.281 00:02:03.281 Message: 00:02:03.281 ================= 00:02:03.281 Libraries Enabled 00:02:03.281 ================= 00:02:03.281 00:02:03.281 libs: 00:02:03.281 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:03.281 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:03.281 cryptodev, dmadev, power, reorder, security, vhost, 00:02:03.281 00:02:03.281 Message: 00:02:03.281 =============== 00:02:03.281 Drivers Enabled 00:02:03.281 =============== 00:02:03.281 00:02:03.281 common: 00:02:03.281 00:02:03.281 bus: 00:02:03.281 pci, vdev, 00:02:03.281 mempool: 00:02:03.281 ring, 00:02:03.281 dma: 00:02:03.281 00:02:03.281 net: 00:02:03.281 00:02:03.281 crypto: 00:02:03.281 00:02:03.281 compress: 00:02:03.281 00:02:03.281 vdpa: 00:02:03.281 00:02:03.281 00:02:03.281 Message: 00:02:03.281 ================= 00:02:03.281 Content Skipped 00:02:03.281 ================= 00:02:03.281 00:02:03.281 apps: 00:02:03.281 dumpcap: explicitly disabled via build config 00:02:03.281 graph: explicitly disabled via build config 00:02:03.281 pdump: explicitly disabled via build config 00:02:03.281 proc-info: explicitly disabled via build config 00:02:03.281 test-acl: explicitly disabled via build config 00:02:03.281 test-bbdev: explicitly disabled via build config 00:02:03.281 test-cmdline: explicitly disabled via build config 00:02:03.281 test-compress-perf: explicitly disabled via build config 00:02:03.281 test-crypto-perf: explicitly disabled via build config 00:02:03.281 test-dma-perf: explicitly disabled via build config 00:02:03.281 
test-eventdev: explicitly disabled via build config 00:02:03.281 test-fib: explicitly disabled via build config 00:02:03.281 test-flow-perf: explicitly disabled via build config 00:02:03.281 test-gpudev: explicitly disabled via build config 00:02:03.281 test-mldev: explicitly disabled via build config 00:02:03.281 test-pipeline: explicitly disabled via build config 00:02:03.281 test-pmd: explicitly disabled via build config 00:02:03.281 test-regex: explicitly disabled via build config 00:02:03.281 test-sad: explicitly disabled via build config 00:02:03.281 test-security-perf: explicitly disabled via build config 00:02:03.281 00:02:03.281 libs: 00:02:03.281 argparse: explicitly disabled via build config 00:02:03.281 metrics: explicitly disabled via build config 00:02:03.281 acl: explicitly disabled via build config 00:02:03.281 bbdev: explicitly disabled via build config 00:02:03.281 bitratestats: explicitly disabled via build config 00:02:03.281 bpf: explicitly disabled via build config 00:02:03.281 cfgfile: explicitly disabled via build config 00:02:03.281 distributor: explicitly disabled via build config 00:02:03.281 efd: explicitly disabled via build config 00:02:03.281 eventdev: explicitly disabled via build config 00:02:03.281 dispatcher: explicitly disabled via build config 00:02:03.281 gpudev: explicitly disabled via build config 00:02:03.281 gro: explicitly disabled via build config 00:02:03.281 gso: explicitly disabled via build config 00:02:03.281 ip_frag: explicitly disabled via build config 00:02:03.281 jobstats: explicitly disabled via build config 00:02:03.281 latencystats: explicitly disabled via build config 00:02:03.281 lpm: explicitly disabled via build config 00:02:03.281 member: explicitly disabled via build config 00:02:03.281 pcapng: explicitly disabled via build config 00:02:03.281 rawdev: explicitly disabled via build config 00:02:03.281 regexdev: explicitly disabled via build config 00:02:03.281 mldev: explicitly disabled via build config 00:02:03.281 rib: explicitly disabled via build config 00:02:03.281 sched: explicitly disabled via build config 00:02:03.281 stack: explicitly disabled via build config 00:02:03.281 ipsec: explicitly disabled via build config 00:02:03.281 pdcp: explicitly disabled via build config 00:02:03.281 fib: explicitly disabled via build config 00:02:03.281 port: explicitly disabled via build config 00:02:03.281 pdump: explicitly disabled via build config 00:02:03.281 table: explicitly disabled via build config 00:02:03.281 pipeline: explicitly disabled via build config 00:02:03.281 graph: explicitly disabled via build config 00:02:03.281 node: explicitly disabled via build config 00:02:03.281 00:02:03.281 drivers: 00:02:03.281 common/cpt: not in enabled drivers build config 00:02:03.281 common/dpaax: not in enabled drivers build config 00:02:03.281 common/iavf: not in enabled drivers build config 00:02:03.281 common/idpf: not in enabled drivers build config 00:02:03.281 common/ionic: not in enabled drivers build config 00:02:03.281 common/mvep: not in enabled drivers build config 00:02:03.281 common/octeontx: not in enabled drivers build config 00:02:03.281 bus/auxiliary: not in enabled drivers build config 00:02:03.281 bus/cdx: not in enabled drivers build config 00:02:03.281 bus/dpaa: not in enabled drivers build config 00:02:03.281 bus/fslmc: not in enabled drivers build config 00:02:03.281 bus/ifpga: not in enabled drivers build config 00:02:03.281 bus/platform: not in enabled drivers build config 00:02:03.281 bus/uacce: not in enabled 
drivers build config 00:02:03.281 bus/vmbus: not in enabled drivers build config 00:02:03.281 common/cnxk: not in enabled drivers build config 00:02:03.281 common/mlx5: not in enabled drivers build config 00:02:03.281 common/nfp: not in enabled drivers build config 00:02:03.281 common/nitrox: not in enabled drivers build config 00:02:03.281 common/qat: not in enabled drivers build config 00:02:03.281 common/sfc_efx: not in enabled drivers build config 00:02:03.281 mempool/bucket: not in enabled drivers build config 00:02:03.281 mempool/cnxk: not in enabled drivers build config 00:02:03.281 mempool/dpaa: not in enabled drivers build config 00:02:03.281 mempool/dpaa2: not in enabled drivers build config 00:02:03.282 mempool/octeontx: not in enabled drivers build config 00:02:03.282 mempool/stack: not in enabled drivers build config 00:02:03.282 dma/cnxk: not in enabled drivers build config 00:02:03.282 dma/dpaa: not in enabled drivers build config 00:02:03.282 dma/dpaa2: not in enabled drivers build config 00:02:03.282 dma/hisilicon: not in enabled drivers build config 00:02:03.282 dma/idxd: not in enabled drivers build config 00:02:03.282 dma/ioat: not in enabled drivers build config 00:02:03.282 dma/skeleton: not in enabled drivers build config 00:02:03.282 net/af_packet: not in enabled drivers build config 00:02:03.282 net/af_xdp: not in enabled drivers build config 00:02:03.282 net/ark: not in enabled drivers build config 00:02:03.282 net/atlantic: not in enabled drivers build config 00:02:03.282 net/avp: not in enabled drivers build config 00:02:03.282 net/axgbe: not in enabled drivers build config 00:02:03.282 net/bnx2x: not in enabled drivers build config 00:02:03.282 net/bnxt: not in enabled drivers build config 00:02:03.282 net/bonding: not in enabled drivers build config 00:02:03.282 net/cnxk: not in enabled drivers build config 00:02:03.282 net/cpfl: not in enabled drivers build config 00:02:03.282 net/cxgbe: not in enabled drivers build config 00:02:03.282 net/dpaa: not in enabled drivers build config 00:02:03.282 net/dpaa2: not in enabled drivers build config 00:02:03.282 net/e1000: not in enabled drivers build config 00:02:03.282 net/ena: not in enabled drivers build config 00:02:03.282 net/enetc: not in enabled drivers build config 00:02:03.282 net/enetfec: not in enabled drivers build config 00:02:03.282 net/enic: not in enabled drivers build config 00:02:03.282 net/failsafe: not in enabled drivers build config 00:02:03.282 net/fm10k: not in enabled drivers build config 00:02:03.282 net/gve: not in enabled drivers build config 00:02:03.282 net/hinic: not in enabled drivers build config 00:02:03.282 net/hns3: not in enabled drivers build config 00:02:03.282 net/i40e: not in enabled drivers build config 00:02:03.282 net/iavf: not in enabled drivers build config 00:02:03.282 net/ice: not in enabled drivers build config 00:02:03.282 net/idpf: not in enabled drivers build config 00:02:03.282 net/igc: not in enabled drivers build config 00:02:03.282 net/ionic: not in enabled drivers build config 00:02:03.282 net/ipn3ke: not in enabled drivers build config 00:02:03.282 net/ixgbe: not in enabled drivers build config 00:02:03.282 net/mana: not in enabled drivers build config 00:02:03.282 net/memif: not in enabled drivers build config 00:02:03.282 net/mlx4: not in enabled drivers build config 00:02:03.282 net/mlx5: not in enabled drivers build config 00:02:03.282 net/mvneta: not in enabled drivers build config 00:02:03.282 net/mvpp2: not in enabled drivers build config 00:02:03.282 
net/netvsc: not in enabled drivers build config 00:02:03.282 net/nfb: not in enabled drivers build config 00:02:03.282 net/nfp: not in enabled drivers build config 00:02:03.282 net/ngbe: not in enabled drivers build config 00:02:03.282 net/null: not in enabled drivers build config 00:02:03.282 net/octeontx: not in enabled drivers build config 00:02:03.282 net/octeon_ep: not in enabled drivers build config 00:02:03.282 net/pcap: not in enabled drivers build config 00:02:03.282 net/pfe: not in enabled drivers build config 00:02:03.282 net/qede: not in enabled drivers build config 00:02:03.282 net/ring: not in enabled drivers build config 00:02:03.282 net/sfc: not in enabled drivers build config 00:02:03.282 net/softnic: not in enabled drivers build config 00:02:03.282 net/tap: not in enabled drivers build config 00:02:03.282 net/thunderx: not in enabled drivers build config 00:02:03.282 net/txgbe: not in enabled drivers build config 00:02:03.282 net/vdev_netvsc: not in enabled drivers build config 00:02:03.282 net/vhost: not in enabled drivers build config 00:02:03.282 net/virtio: not in enabled drivers build config 00:02:03.282 net/vmxnet3: not in enabled drivers build config 00:02:03.282 raw/*: missing internal dependency, "rawdev" 00:02:03.282 crypto/armv8: not in enabled drivers build config 00:02:03.282 crypto/bcmfs: not in enabled drivers build config 00:02:03.282 crypto/caam_jr: not in enabled drivers build config 00:02:03.282 crypto/ccp: not in enabled drivers build config 00:02:03.282 crypto/cnxk: not in enabled drivers build config 00:02:03.282 crypto/dpaa_sec: not in enabled drivers build config 00:02:03.282 crypto/dpaa2_sec: not in enabled drivers build config 00:02:03.282 crypto/ipsec_mb: not in enabled drivers build config 00:02:03.282 crypto/mlx5: not in enabled drivers build config 00:02:03.282 crypto/mvsam: not in enabled drivers build config 00:02:03.282 crypto/nitrox: not in enabled drivers build config 00:02:03.282 crypto/null: not in enabled drivers build config 00:02:03.282 crypto/octeontx: not in enabled drivers build config 00:02:03.282 crypto/openssl: not in enabled drivers build config 00:02:03.282 crypto/scheduler: not in enabled drivers build config 00:02:03.282 crypto/uadk: not in enabled drivers build config 00:02:03.282 crypto/virtio: not in enabled drivers build config 00:02:03.282 compress/isal: not in enabled drivers build config 00:02:03.282 compress/mlx5: not in enabled drivers build config 00:02:03.282 compress/nitrox: not in enabled drivers build config 00:02:03.282 compress/octeontx: not in enabled drivers build config 00:02:03.282 compress/zlib: not in enabled drivers build config 00:02:03.282 regex/*: missing internal dependency, "regexdev" 00:02:03.282 ml/*: missing internal dependency, "mldev" 00:02:03.282 vdpa/ifc: not in enabled drivers build config 00:02:03.282 vdpa/mlx5: not in enabled drivers build config 00:02:03.282 vdpa/nfp: not in enabled drivers build config 00:02:03.282 vdpa/sfc: not in enabled drivers build config 00:02:03.282 event/*: missing internal dependency, "eventdev" 00:02:03.282 baseband/*: missing internal dependency, "bbdev" 00:02:03.282 gpu/*: missing internal dependency, "gpudev" 00:02:03.282 00:02:03.282 00:02:03.282 Build targets in project: 85 00:02:03.282 00:02:03.282 DPDK 24.03.0 00:02:03.282 00:02:03.282 User defined options 00:02:03.282 buildtype : debug 00:02:03.282 default_library : shared 00:02:03.282 libdir : lib 00:02:03.282 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:03.282 
b_sanitize : address 00:02:03.282 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:03.282 c_link_args : 00:02:03.282 cpu_instruction_set: native 00:02:03.282 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:03.282 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:03.282 enable_docs : false 00:02:03.282 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:03.282 enable_kmods : false 00:02:03.282 max_lcores : 128 00:02:03.282 tests : false 00:02:03.282 00:02:03.282 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:03.282 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:03.282 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:03.282 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:03.282 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:03.282 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:03.282 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:03.282 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:03.282 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:03.282 [8/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:03.282 [9/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:03.282 [10/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:03.282 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:03.282 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:03.282 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:03.282 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:03.282 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:03.282 [16/268] Linking static target lib/librte_kvargs.a 00:02:03.282 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:03.282 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:03.282 [19/268] Linking static target lib/librte_log.a 00:02:03.282 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:03.282 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:03.282 [22/268] Linking static target lib/librte_pci.a 00:02:03.283 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:03.283 [24/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:03.283 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:03.283 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:03.283 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:03.283 [28/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:03.283 [29/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:03.283 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:03.283 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:03.283 [32/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:03.283 [33/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:03.283 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:03.283 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:03.283 [36/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:03.283 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:03.283 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:03.283 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:03.283 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:03.283 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:03.283 [42/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:03.283 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:03.283 [44/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:03.283 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:03.283 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:03.283 [47/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:03.283 [48/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:03.283 [49/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:03.283 [50/268] Linking static target lib/librte_meter.a 00:02:03.283 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:03.283 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:03.283 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:03.283 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:03.283 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:03.283 [56/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:03.283 [57/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:03.283 [58/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:03.283 [59/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:03.283 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:03.283 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:03.283 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:03.283 [63/268] Linking static target lib/librte_ring.a 00:02:03.283 [64/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:03.283 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:03.283 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:03.283 [67/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:03.283 [68/268] Compiling C object 
lib/librte_net.a.p/net_rte_ether.c.o 00:02:03.283 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:03.283 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:03.283 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:03.283 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:03.544 [73/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:03.544 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:03.544 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:03.544 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:03.544 [77/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:03.544 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:03.544 [79/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:03.544 [80/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:03.544 [81/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:03.544 [82/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:03.544 [83/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:03.544 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:03.544 [85/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:03.544 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:03.544 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:03.544 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:03.544 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:03.544 [90/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:03.544 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:03.544 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:03.544 [93/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.544 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:03.544 [95/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:03.544 [96/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:03.544 [97/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:03.544 [98/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:03.544 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:03.544 [100/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:03.544 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:03.544 [102/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:03.544 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:03.544 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:03.544 [105/268] Linking static target lib/librte_telemetry.a 00:02:03.544 [106/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.544 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:03.544 [108/268] Compiling 
C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:03.544 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:03.544 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:03.544 [111/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:03.544 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:03.544 [113/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:03.544 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:03.544 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:03.544 [116/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:03.544 [117/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:03.544 [118/268] Linking static target lib/librte_cmdline.a 00:02:03.544 [119/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:03.544 [120/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:03.804 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:03.804 [122/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:03.804 [123/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.804 [124/268] Linking static target lib/librte_mempool.a 00:02:03.804 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:03.804 [126/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:03.804 [127/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.804 [128/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.804 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:03.804 [130/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:03.804 [131/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:03.804 [132/268] Linking static target lib/librte_net.a 00:02:03.804 [133/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:03.804 [134/268] Linking static target lib/librte_eal.a 00:02:03.804 [135/268] Linking target lib/librte_log.so.24.1 00:02:03.804 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:03.804 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:03.804 [138/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:03.804 [139/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:03.804 [140/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:03.804 [141/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:03.804 [142/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:03.804 [143/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:03.804 [144/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:03.804 [145/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:03.804 [146/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:03.804 [147/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:03.804 [148/268] Compiling C object 
lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:03.804 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:03.804 [150/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:03.804 [151/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:03.804 [152/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:03.804 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:03.804 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:03.804 [155/268] Linking static target lib/librte_timer.a 00:02:03.804 [156/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:03.804 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:03.804 [158/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:03.804 [159/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:04.063 [160/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:04.063 [161/268] Linking target lib/librte_kvargs.so.24.1 00:02:04.063 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:04.063 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:04.063 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:04.063 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:04.063 [166/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.063 [167/268] Linking static target lib/librte_dmadev.a 00:02:04.063 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:04.063 [169/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:04.063 [170/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:04.063 [171/268] Linking static target lib/librte_power.a 00:02:04.063 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:04.063 [173/268] Linking static target lib/librte_rcu.a 00:02:04.063 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:04.063 [175/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:04.063 [176/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:04.063 [177/268] Linking target lib/librte_telemetry.so.24.1 00:02:04.063 [178/268] Linking static target lib/librte_compressdev.a 00:02:04.063 [179/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.063 [180/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:04.063 [181/268] Linking static target drivers/librte_bus_vdev.a 00:02:04.063 [182/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.063 [183/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.063 [184/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:04.063 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:04.063 [186/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:04.063 [187/268] Linking static target lib/librte_reorder.a 00:02:04.063 [188/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:04.322 [189/268] 
Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:04.322 [190/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:04.322 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:04.322 [192/268] Linking static target lib/librte_security.a 00:02:04.322 [193/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:04.322 [194/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:04.322 [195/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:04.322 [196/268] Linking static target lib/librte_mbuf.a 00:02:04.322 [197/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:04.322 [198/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.322 [199/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.322 [200/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:04.322 [201/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:04.322 [202/268] Linking static target drivers/librte_bus_pci.a 00:02:04.322 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:04.322 [204/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.322 [205/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:04.322 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:04.322 [207/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:04.322 [208/268] Linking static target lib/librte_hash.a 00:02:04.580 [209/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:04.580 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:04.580 [211/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.580 [212/268] Linking static target drivers/librte_mempool_ring.a 00:02:04.580 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.580 [214/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.580 [215/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.580 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:04.580 [217/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.839 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.839 [219/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:04.839 [220/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.839 [221/268] Linking static target lib/librte_cryptodev.a 00:02:05.098 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.098 [223/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.358 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.617 [225/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:05.617 [226/268] Linking static target lib/librte_ethdev.a 00:02:06.553 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:06.553 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.839 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:09.839 [230/268] Linking static target lib/librte_vhost.a 00:02:11.743 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.119 [232/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.119 [233/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.377 [234/268] Linking target lib/librte_eal.so.24.1 00:02:13.377 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:13.377 [236/268] Linking target lib/librte_meter.so.24.1 00:02:13.377 [237/268] Linking target lib/librte_ring.so.24.1 00:02:13.377 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:13.377 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:13.377 [240/268] Linking target lib/librte_pci.so.24.1 00:02:13.377 [241/268] Linking target lib/librte_timer.so.24.1 00:02:13.635 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:13.635 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:13.635 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:13.635 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:13.635 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:13.635 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:13.635 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:13.635 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:13.635 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:13.635 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:13.635 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:13.635 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:13.893 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:13.893 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:13.893 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:13.893 [257/268] Linking target lib/librte_net.so.24.1 00:02:13.893 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:14.151 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:14.151 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:14.151 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:14.151 [262/268] Linking target lib/librte_hash.so.24.1 00:02:14.151 [263/268] Linking target lib/librte_security.so.24.1 00:02:14.151 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:14.151 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:14.151 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:14.409 [267/268] Linking target lib/librte_power.so.24.1 00:02:14.409 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:14.409 INFO: 
autodetecting backend as ninja 00:02:14.409 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:26.608 CC lib/ut_mock/mock.o 00:02:26.608 CC lib/log/log.o 00:02:26.608 CC lib/ut/ut.o 00:02:26.608 CC lib/log/log_flags.o 00:02:26.608 CC lib/log/log_deprecated.o 00:02:26.608 LIB libspdk_ut.a 00:02:26.608 LIB libspdk_ut_mock.a 00:02:26.608 LIB libspdk_log.a 00:02:26.608 SO libspdk_ut.so.2.0 00:02:26.608 SO libspdk_ut_mock.so.6.0 00:02:26.608 SO libspdk_log.so.7.1 00:02:26.608 SYMLINK libspdk_ut.so 00:02:26.608 SYMLINK libspdk_ut_mock.so 00:02:26.608 SYMLINK libspdk_log.so 00:02:26.608 CC lib/dma/dma.o 00:02:26.608 CC lib/ioat/ioat.o 00:02:26.608 CC lib/util/base64.o 00:02:26.608 CC lib/util/bit_array.o 00:02:26.608 CC lib/util/cpuset.o 00:02:26.608 CC lib/util/crc16.o 00:02:26.608 CC lib/util/crc32.o 00:02:26.608 CC lib/util/crc32c.o 00:02:26.608 CXX lib/trace_parser/trace.o 00:02:26.608 CC lib/util/crc32_ieee.o 00:02:26.608 CC lib/util/crc64.o 00:02:26.608 CC lib/util/dif.o 00:02:26.608 CC lib/util/fd.o 00:02:26.608 CC lib/util/fd_group.o 00:02:26.608 CC lib/util/file.o 00:02:26.608 CC lib/util/hexlify.o 00:02:26.608 CC lib/util/math.o 00:02:26.608 CC lib/util/iov.o 00:02:26.608 CC lib/util/net.o 00:02:26.608 CC lib/util/pipe.o 00:02:26.608 CC lib/util/strerror_tls.o 00:02:26.608 CC lib/util/string.o 00:02:26.608 CC lib/util/uuid.o 00:02:26.608 CC lib/util/xor.o 00:02:26.608 CC lib/util/zipf.o 00:02:26.608 CC lib/util/md5.o 00:02:26.608 CC lib/vfio_user/host/vfio_user_pci.o 00:02:26.608 CC lib/vfio_user/host/vfio_user.o 00:02:26.608 LIB libspdk_dma.a 00:02:26.608 SO libspdk_dma.so.5.0 00:02:26.608 SYMLINK libspdk_dma.so 00:02:26.608 LIB libspdk_ioat.a 00:02:26.608 SO libspdk_ioat.so.7.0 00:02:26.608 SYMLINK libspdk_ioat.so 00:02:26.608 LIB libspdk_vfio_user.a 00:02:26.608 SO libspdk_vfio_user.so.5.0 00:02:26.608 SYMLINK libspdk_vfio_user.so 00:02:26.608 LIB libspdk_util.a 00:02:26.608 SO libspdk_util.so.10.1 00:02:26.608 LIB libspdk_trace_parser.a 00:02:26.608 SYMLINK libspdk_util.so 00:02:26.608 SO libspdk_trace_parser.so.6.0 00:02:26.608 SYMLINK libspdk_trace_parser.so 00:02:26.866 CC lib/rdma_utils/rdma_utils.o 00:02:26.866 CC lib/idxd/idxd.o 00:02:26.866 CC lib/idxd/idxd_user.o 00:02:26.866 CC lib/idxd/idxd_kernel.o 00:02:26.866 CC lib/conf/conf.o 00:02:26.866 CC lib/env_dpdk/env.o 00:02:26.866 CC lib/env_dpdk/memory.o 00:02:26.866 CC lib/env_dpdk/init.o 00:02:26.866 CC lib/env_dpdk/threads.o 00:02:26.866 CC lib/env_dpdk/pci.o 00:02:26.866 CC lib/json/json_parse.o 00:02:26.866 CC lib/json/json_util.o 00:02:26.866 CC lib/env_dpdk/pci_ioat.o 00:02:26.866 CC lib/json/json_write.o 00:02:26.867 CC lib/env_dpdk/pci_vmd.o 00:02:26.867 CC lib/env_dpdk/pci_virtio.o 00:02:26.867 CC lib/env_dpdk/pci_idxd.o 00:02:26.867 CC lib/env_dpdk/pci_event.o 00:02:26.867 CC lib/env_dpdk/pci_dpdk.o 00:02:26.867 CC lib/env_dpdk/sigbus_handler.o 00:02:26.867 CC lib/vmd/vmd.o 00:02:26.867 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:26.867 CC lib/vmd/led.o 00:02:26.867 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:27.125 LIB libspdk_conf.a 00:02:27.125 LIB libspdk_rdma_utils.a 00:02:27.125 SO libspdk_conf.so.6.0 00:02:27.125 SO libspdk_rdma_utils.so.1.0 00:02:27.125 LIB libspdk_json.a 00:02:27.125 SYMLINK libspdk_conf.so 00:02:27.125 SO libspdk_json.so.6.0 00:02:27.125 SYMLINK libspdk_rdma_utils.so 00:02:27.383 SYMLINK libspdk_json.so 00:02:27.383 LIB libspdk_idxd.a 00:02:27.383 SO libspdk_idxd.so.12.1 00:02:27.642 LIB libspdk_vmd.a 
00:02:27.642 CC lib/rdma_provider/common.o 00:02:27.642 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:27.642 SO libspdk_vmd.so.6.0 00:02:27.642 SYMLINK libspdk_idxd.so 00:02:27.642 CC lib/jsonrpc/jsonrpc_server.o 00:02:27.642 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:27.642 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:27.642 CC lib/jsonrpc/jsonrpc_client.o 00:02:27.642 SYMLINK libspdk_vmd.so 00:02:27.642 LIB libspdk_rdma_provider.a 00:02:27.642 SO libspdk_rdma_provider.so.7.0 00:02:27.900 SYMLINK libspdk_rdma_provider.so 00:02:27.900 LIB libspdk_jsonrpc.a 00:02:27.900 SO libspdk_jsonrpc.so.6.0 00:02:27.900 SYMLINK libspdk_jsonrpc.so 00:02:28.159 LIB libspdk_env_dpdk.a 00:02:28.159 SO libspdk_env_dpdk.so.15.1 00:02:28.418 CC lib/rpc/rpc.o 00:02:28.418 SYMLINK libspdk_env_dpdk.so 00:02:28.418 LIB libspdk_rpc.a 00:02:28.418 SO libspdk_rpc.so.6.0 00:02:28.676 SYMLINK libspdk_rpc.so 00:02:28.934 CC lib/keyring/keyring.o 00:02:28.934 CC lib/keyring/keyring_rpc.o 00:02:28.934 CC lib/trace/trace.o 00:02:28.934 CC lib/trace/trace_flags.o 00:02:28.934 CC lib/trace/trace_rpc.o 00:02:28.934 CC lib/notify/notify.o 00:02:28.934 CC lib/notify/notify_rpc.o 00:02:29.192 LIB libspdk_notify.a 00:02:29.192 SO libspdk_notify.so.6.0 00:02:29.192 LIB libspdk_keyring.a 00:02:29.192 SO libspdk_keyring.so.2.0 00:02:29.192 LIB libspdk_trace.a 00:02:29.192 SYMLINK libspdk_notify.so 00:02:29.192 SO libspdk_trace.so.11.0 00:02:29.192 SYMLINK libspdk_keyring.so 00:02:29.192 SYMLINK libspdk_trace.so 00:02:29.759 CC lib/thread/thread.o 00:02:29.759 CC lib/thread/iobuf.o 00:02:29.759 CC lib/sock/sock.o 00:02:29.759 CC lib/sock/sock_rpc.o 00:02:30.017 LIB libspdk_sock.a 00:02:30.017 SO libspdk_sock.so.10.0 00:02:30.017 SYMLINK libspdk_sock.so 00:02:30.583 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:30.583 CC lib/nvme/nvme_ctrlr.o 00:02:30.583 CC lib/nvme/nvme_fabric.o 00:02:30.583 CC lib/nvme/nvme_ns_cmd.o 00:02:30.583 CC lib/nvme/nvme_ns.o 00:02:30.583 CC lib/nvme/nvme_qpair.o 00:02:30.583 CC lib/nvme/nvme_pcie_common.o 00:02:30.583 CC lib/nvme/nvme_pcie.o 00:02:30.583 CC lib/nvme/nvme.o 00:02:30.583 CC lib/nvme/nvme_quirks.o 00:02:30.583 CC lib/nvme/nvme_transport.o 00:02:30.583 CC lib/nvme/nvme_discovery.o 00:02:30.583 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:30.583 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:30.583 CC lib/nvme/nvme_tcp.o 00:02:30.583 CC lib/nvme/nvme_opal.o 00:02:30.583 CC lib/nvme/nvme_io_msg.o 00:02:30.583 CC lib/nvme/nvme_poll_group.o 00:02:30.583 CC lib/nvme/nvme_zns.o 00:02:30.583 CC lib/nvme/nvme_stubs.o 00:02:30.583 CC lib/nvme/nvme_auth.o 00:02:30.583 CC lib/nvme/nvme_cuse.o 00:02:30.583 CC lib/nvme/nvme_rdma.o 00:02:31.149 LIB libspdk_thread.a 00:02:31.149 SO libspdk_thread.so.11.0 00:02:31.149 SYMLINK libspdk_thread.so 00:02:31.406 CC lib/fsdev/fsdev.o 00:02:31.406 CC lib/fsdev/fsdev_io.o 00:02:31.406 CC lib/fsdev/fsdev_rpc.o 00:02:31.406 CC lib/accel/accel.o 00:02:31.406 CC lib/blob/blobstore.o 00:02:31.406 CC lib/blob/request.o 00:02:31.406 CC lib/blob/zeroes.o 00:02:31.406 CC lib/blob/blob_bs_dev.o 00:02:31.406 CC lib/accel/accel_rpc.o 00:02:31.406 CC lib/accel/accel_sw.o 00:02:31.406 CC lib/init/json_config.o 00:02:31.406 CC lib/init/subsystem.o 00:02:31.406 CC lib/init/subsystem_rpc.o 00:02:31.406 CC lib/init/rpc.o 00:02:31.406 CC lib/virtio/virtio_vfio_user.o 00:02:31.406 CC lib/virtio/virtio.o 00:02:31.406 CC lib/virtio/virtio_vhost_user.o 00:02:31.406 CC lib/virtio/virtio_pci.o 00:02:31.663 LIB libspdk_init.a 00:02:31.663 SO libspdk_init.so.6.0 00:02:31.663 SYMLINK libspdk_init.so 00:02:31.663 LIB 
libspdk_virtio.a 00:02:31.921 SO libspdk_virtio.so.7.0 00:02:31.921 SYMLINK libspdk_virtio.so 00:02:31.921 LIB libspdk_fsdev.a 00:02:32.179 SO libspdk_fsdev.so.2.0 00:02:32.179 CC lib/event/app.o 00:02:32.179 CC lib/event/log_rpc.o 00:02:32.179 CC lib/event/reactor.o 00:02:32.179 CC lib/event/scheduler_static.o 00:02:32.179 CC lib/event/app_rpc.o 00:02:32.179 SYMLINK libspdk_fsdev.so 00:02:32.438 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:32.438 LIB libspdk_accel.a 00:02:32.438 SO libspdk_accel.so.16.0 00:02:32.438 LIB libspdk_event.a 00:02:32.438 LIB libspdk_nvme.a 00:02:32.696 SO libspdk_event.so.14.0 00:02:32.696 SYMLINK libspdk_accel.so 00:02:32.696 SO libspdk_nvme.so.15.0 00:02:32.696 SYMLINK libspdk_event.so 00:02:32.953 SYMLINK libspdk_nvme.so 00:02:32.954 CC lib/bdev/bdev.o 00:02:32.954 CC lib/bdev/bdev_rpc.o 00:02:32.954 CC lib/bdev/bdev_zone.o 00:02:32.954 CC lib/bdev/part.o 00:02:32.954 CC lib/bdev/scsi_nvme.o 00:02:32.954 LIB libspdk_fuse_dispatcher.a 00:02:33.211 SO libspdk_fuse_dispatcher.so.1.0 00:02:33.211 SYMLINK libspdk_fuse_dispatcher.so 00:02:34.585 LIB libspdk_blob.a 00:02:34.585 SO libspdk_blob.so.12.0 00:02:34.585 SYMLINK libspdk_blob.so 00:02:34.842 CC lib/lvol/lvol.o 00:02:34.842 CC lib/blobfs/blobfs.o 00:02:34.842 CC lib/blobfs/tree.o 00:02:35.407 LIB libspdk_bdev.a 00:02:35.407 SO libspdk_bdev.so.17.0 00:02:35.407 SYMLINK libspdk_bdev.so 00:02:35.665 LIB libspdk_blobfs.a 00:02:35.665 SO libspdk_blobfs.so.11.0 00:02:35.665 LIB libspdk_lvol.a 00:02:35.665 SO libspdk_lvol.so.11.0 00:02:35.665 SYMLINK libspdk_blobfs.so 00:02:35.923 CC lib/nbd/nbd.o 00:02:35.923 CC lib/nbd/nbd_rpc.o 00:02:35.923 CC lib/nvmf/ctrlr.o 00:02:35.923 CC lib/nvmf/ctrlr_discovery.o 00:02:35.923 CC lib/ftl/ftl_core.o 00:02:35.923 CC lib/nvmf/ctrlr_bdev.o 00:02:35.923 CC lib/nvmf/subsystem.o 00:02:35.923 CC lib/nvmf/nvmf_rpc.o 00:02:35.923 CC lib/nvmf/nvmf.o 00:02:35.923 CC lib/ftl/ftl_init.o 00:02:35.923 CC lib/nvmf/transport.o 00:02:35.923 CC lib/nvmf/tcp.o 00:02:35.923 CC lib/ftl/ftl_layout.o 00:02:35.923 CC lib/nvmf/mdns_server.o 00:02:35.923 CC lib/nvmf/stubs.o 00:02:35.923 CC lib/ftl/ftl_debug.o 00:02:35.923 CC lib/ftl/ftl_io.o 00:02:35.923 CC lib/nvmf/rdma.o 00:02:35.923 CC lib/ftl/ftl_sb.o 00:02:35.923 CC lib/nvmf/auth.o 00:02:35.923 CC lib/ftl/ftl_l2p.o 00:02:35.923 CC lib/ftl/ftl_l2p_flat.o 00:02:35.923 CC lib/ftl/ftl_nv_cache.o 00:02:35.923 CC lib/ftl/ftl_band.o 00:02:35.923 CC lib/ftl/ftl_band_ops.o 00:02:35.923 CC lib/ftl/ftl_writer.o 00:02:35.923 CC lib/ftl/ftl_rq.o 00:02:35.923 CC lib/ftl/ftl_reloc.o 00:02:35.923 CC lib/ftl/ftl_l2p_cache.o 00:02:35.923 CC lib/ftl/ftl_p2l_log.o 00:02:35.923 CC lib/ftl/ftl_p2l.o 00:02:35.923 CC lib/scsi/dev.o 00:02:35.923 CC lib/ftl/mngt/ftl_mngt.o 00:02:35.923 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:35.923 CC lib/scsi/lun.o 00:02:35.923 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:35.923 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:35.923 CC lib/scsi/port.o 00:02:35.923 CC lib/scsi/scsi.o 00:02:35.923 CC lib/scsi/scsi_bdev.o 00:02:35.923 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:35.923 CC lib/scsi/scsi_pr.o 00:02:35.923 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:35.923 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:35.923 CC lib/ublk/ublk.o 00:02:35.923 CC lib/scsi/scsi_rpc.o 00:02:35.923 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:35.923 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:35.923 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:35.923 CC lib/scsi/task.o 00:02:35.923 CC lib/ublk/ublk_rpc.o 00:02:35.923 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:35.923 CC lib/ftl/mngt/ftl_mngt_recovery.o 
00:02:35.923 CC lib/ftl/utils/ftl_conf.o 00:02:35.923 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:35.923 CC lib/ftl/utils/ftl_md.o 00:02:35.923 CC lib/ftl/utils/ftl_mempool.o 00:02:35.923 CC lib/ftl/utils/ftl_property.o 00:02:35.923 CC lib/ftl/utils/ftl_bitmap.o 00:02:35.923 SYMLINK libspdk_lvol.so 00:02:35.923 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:35.923 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:35.923 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:35.923 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:35.923 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:35.923 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:35.923 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:35.923 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:35.923 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:35.924 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:35.924 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:35.924 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:35.924 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:35.924 CC lib/ftl/base/ftl_base_dev.o 00:02:35.924 CC lib/ftl/base/ftl_base_bdev.o 00:02:35.924 CC lib/ftl/ftl_trace.o 00:02:36.489 LIB libspdk_scsi.a 00:02:36.489 LIB libspdk_nbd.a 00:02:36.489 SO libspdk_scsi.so.9.0 00:02:36.489 SO libspdk_nbd.so.7.0 00:02:36.746 SYMLINK libspdk_scsi.so 00:02:36.746 SYMLINK libspdk_nbd.so 00:02:36.746 LIB libspdk_ublk.a 00:02:36.746 SO libspdk_ublk.so.3.0 00:02:37.003 SYMLINK libspdk_ublk.so 00:02:37.003 CC lib/vhost/vhost.o 00:02:37.003 CC lib/vhost/vhost_rpc.o 00:02:37.003 CC lib/vhost/vhost_scsi.o 00:02:37.003 CC lib/vhost/rte_vhost_user.o 00:02:37.003 CC lib/vhost/vhost_blk.o 00:02:37.003 CC lib/iscsi/conn.o 00:02:37.003 CC lib/iscsi/init_grp.o 00:02:37.004 CC lib/iscsi/iscsi.o 00:02:37.004 CC lib/iscsi/param.o 00:02:37.004 CC lib/iscsi/iscsi_subsystem.o 00:02:37.004 CC lib/iscsi/portal_grp.o 00:02:37.004 CC lib/iscsi/tgt_node.o 00:02:37.004 CC lib/iscsi/iscsi_rpc.o 00:02:37.004 CC lib/iscsi/task.o 00:02:37.004 LIB libspdk_ftl.a 00:02:37.261 SO libspdk_ftl.so.9.0 00:02:37.518 SYMLINK libspdk_ftl.so 00:02:37.775 LIB libspdk_vhost.a 00:02:38.033 SO libspdk_vhost.so.8.0 00:02:38.033 SYMLINK libspdk_vhost.so 00:02:38.291 LIB libspdk_nvmf.a 00:02:38.291 LIB libspdk_iscsi.a 00:02:38.291 SO libspdk_nvmf.so.20.0 00:02:38.291 SO libspdk_iscsi.so.8.0 00:02:38.548 SYMLINK libspdk_iscsi.so 00:02:38.548 SYMLINK libspdk_nvmf.so 00:02:39.112 CC module/env_dpdk/env_dpdk_rpc.o 00:02:39.112 LIB libspdk_env_dpdk_rpc.a 00:02:39.112 CC module/accel/iaa/accel_iaa.o 00:02:39.112 CC module/accel/error/accel_error_rpc.o 00:02:39.112 CC module/accel/iaa/accel_iaa_rpc.o 00:02:39.112 CC module/accel/error/accel_error.o 00:02:39.112 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:39.112 CC module/keyring/file/keyring.o 00:02:39.112 CC module/accel/dsa/accel_dsa_rpc.o 00:02:39.112 CC module/keyring/file/keyring_rpc.o 00:02:39.112 CC module/accel/dsa/accel_dsa.o 00:02:39.113 CC module/accel/ioat/accel_ioat.o 00:02:39.113 CC module/accel/ioat/accel_ioat_rpc.o 00:02:39.113 CC module/keyring/linux/keyring.o 00:02:39.113 CC module/keyring/linux/keyring_rpc.o 00:02:39.113 CC module/scheduler/gscheduler/gscheduler.o 00:02:39.113 CC module/fsdev/aio/fsdev_aio.o 00:02:39.113 CC module/sock/posix/posix.o 00:02:39.113 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:39.113 CC module/fsdev/aio/linux_aio_mgr.o 00:02:39.113 CC module/blob/bdev/blob_bdev.o 00:02:39.113 SO libspdk_env_dpdk_rpc.so.6.0 00:02:39.113 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:39.370 SYMLINK libspdk_env_dpdk_rpc.so 00:02:39.370 LIB libspdk_keyring_file.a 00:02:39.370 LIB 
libspdk_scheduler_gscheduler.a 00:02:39.370 LIB libspdk_keyring_linux.a 00:02:39.370 LIB libspdk_scheduler_dpdk_governor.a 00:02:39.370 SO libspdk_keyring_file.so.2.0 00:02:39.370 SO libspdk_keyring_linux.so.1.0 00:02:39.370 SO libspdk_scheduler_gscheduler.so.4.0 00:02:39.370 LIB libspdk_accel_iaa.a 00:02:39.370 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:39.370 LIB libspdk_accel_ioat.a 00:02:39.370 LIB libspdk_accel_error.a 00:02:39.370 SO libspdk_accel_error.so.2.0 00:02:39.370 SO libspdk_accel_iaa.so.3.0 00:02:39.370 SO libspdk_accel_ioat.so.6.0 00:02:39.370 SYMLINK libspdk_scheduler_gscheduler.so 00:02:39.370 LIB libspdk_scheduler_dynamic.a 00:02:39.370 SYMLINK libspdk_keyring_file.so 00:02:39.370 SYMLINK libspdk_keyring_linux.so 00:02:39.370 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:39.370 SO libspdk_scheduler_dynamic.so.4.0 00:02:39.370 LIB libspdk_blob_bdev.a 00:02:39.370 SYMLINK libspdk_accel_error.so 00:02:39.370 SYMLINK libspdk_accel_iaa.so 00:02:39.370 SYMLINK libspdk_accel_ioat.so 00:02:39.370 LIB libspdk_accel_dsa.a 00:02:39.628 SO libspdk_blob_bdev.so.12.0 00:02:39.628 SO libspdk_accel_dsa.so.5.0 00:02:39.628 SYMLINK libspdk_scheduler_dynamic.so 00:02:39.628 SYMLINK libspdk_blob_bdev.so 00:02:39.628 SYMLINK libspdk_accel_dsa.so 00:02:39.886 LIB libspdk_fsdev_aio.a 00:02:39.886 SO libspdk_fsdev_aio.so.1.0 00:02:39.886 LIB libspdk_sock_posix.a 00:02:39.886 SO libspdk_sock_posix.so.6.0 00:02:39.886 SYMLINK libspdk_fsdev_aio.so 00:02:40.145 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:40.145 CC module/bdev/delay/vbdev_delay.o 00:02:40.145 CC module/bdev/malloc/bdev_malloc.o 00:02:40.145 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:40.145 CC module/bdev/gpt/gpt.o 00:02:40.145 CC module/bdev/gpt/vbdev_gpt.o 00:02:40.145 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:40.145 CC module/bdev/lvol/vbdev_lvol.o 00:02:40.145 CC module/bdev/nvme/bdev_nvme.o 00:02:40.145 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:40.145 CC module/bdev/ftl/bdev_ftl.o 00:02:40.145 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:40.145 CC module/bdev/iscsi/bdev_iscsi.o 00:02:40.145 CC module/bdev/nvme/nvme_rpc.o 00:02:40.145 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:40.145 CC module/bdev/nvme/bdev_mdns_client.o 00:02:40.145 SYMLINK libspdk_sock_posix.so 00:02:40.145 CC module/bdev/nvme/vbdev_opal.o 00:02:40.145 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:40.145 CC module/bdev/error/vbdev_error.o 00:02:40.145 CC module/blobfs/bdev/blobfs_bdev.o 00:02:40.145 CC module/bdev/error/vbdev_error_rpc.o 00:02:40.145 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:40.145 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:40.145 CC module/bdev/passthru/vbdev_passthru.o 00:02:40.145 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:40.145 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:40.145 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:40.145 CC module/bdev/split/vbdev_split.o 00:02:40.145 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:40.145 CC module/bdev/split/vbdev_split_rpc.o 00:02:40.145 CC module/bdev/aio/bdev_aio.o 00:02:40.145 CC module/bdev/aio/bdev_aio_rpc.o 00:02:40.145 CC module/bdev/null/bdev_null.o 00:02:40.145 CC module/bdev/null/bdev_null_rpc.o 00:02:40.145 CC module/bdev/raid/bdev_raid.o 00:02:40.145 CC module/bdev/raid/bdev_raid_rpc.o 00:02:40.145 CC module/bdev/raid/raid0.o 00:02:40.145 CC module/bdev/raid/bdev_raid_sb.o 00:02:40.145 CC module/bdev/raid/raid1.o 00:02:40.145 CC module/bdev/raid/concat.o 00:02:40.145 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:40.145 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:40.402 LIB libspdk_blobfs_bdev.a 00:02:40.402 SO libspdk_blobfs_bdev.so.6.0 00:02:40.402 LIB libspdk_bdev_gpt.a 00:02:40.402 LIB libspdk_bdev_split.a 00:02:40.402 SO libspdk_bdev_split.so.6.0 00:02:40.402 SO libspdk_bdev_gpt.so.6.0 00:02:40.402 SYMLINK libspdk_blobfs_bdev.so 00:02:40.402 LIB libspdk_bdev_error.a 00:02:40.402 LIB libspdk_bdev_null.a 00:02:40.402 LIB libspdk_bdev_passthru.a 00:02:40.402 LIB libspdk_bdev_ftl.a 00:02:40.402 SO libspdk_bdev_null.so.6.0 00:02:40.403 SO libspdk_bdev_error.so.6.0 00:02:40.403 SO libspdk_bdev_passthru.so.6.0 00:02:40.403 SYMLINK libspdk_bdev_split.so 00:02:40.403 SYMLINK libspdk_bdev_gpt.so 00:02:40.403 LIB libspdk_bdev_iscsi.a 00:02:40.403 SO libspdk_bdev_ftl.so.6.0 00:02:40.403 LIB libspdk_bdev_zone_block.a 00:02:40.403 LIB libspdk_bdev_aio.a 00:02:40.403 LIB libspdk_bdev_delay.a 00:02:40.661 LIB libspdk_bdev_malloc.a 00:02:40.661 SO libspdk_bdev_iscsi.so.6.0 00:02:40.661 SO libspdk_bdev_aio.so.6.0 00:02:40.661 SYMLINK libspdk_bdev_null.so 00:02:40.661 SYMLINK libspdk_bdev_error.so 00:02:40.661 SO libspdk_bdev_zone_block.so.6.0 00:02:40.661 SYMLINK libspdk_bdev_passthru.so 00:02:40.661 SO libspdk_bdev_malloc.so.6.0 00:02:40.661 SO libspdk_bdev_delay.so.6.0 00:02:40.661 SYMLINK libspdk_bdev_ftl.so 00:02:40.661 SYMLINK libspdk_bdev_iscsi.so 00:02:40.661 SYMLINK libspdk_bdev_zone_block.so 00:02:40.661 SYMLINK libspdk_bdev_aio.so 00:02:40.661 SYMLINK libspdk_bdev_delay.so 00:02:40.661 SYMLINK libspdk_bdev_malloc.so 00:02:40.661 LIB libspdk_bdev_lvol.a 00:02:40.661 LIB libspdk_bdev_virtio.a 00:02:40.661 SO libspdk_bdev_lvol.so.6.0 00:02:40.661 SO libspdk_bdev_virtio.so.6.0 00:02:40.661 SYMLINK libspdk_bdev_lvol.so 00:02:40.661 SYMLINK libspdk_bdev_virtio.so 00:02:41.227 LIB libspdk_bdev_raid.a 00:02:41.227 SO libspdk_bdev_raid.so.6.0 00:02:41.227 SYMLINK libspdk_bdev_raid.so 00:02:42.601 LIB libspdk_bdev_nvme.a 00:02:42.601 SO libspdk_bdev_nvme.so.7.1 00:02:42.601 SYMLINK libspdk_bdev_nvme.so 00:02:43.534 CC module/event/subsystems/keyring/keyring.o 00:02:43.534 CC module/event/subsystems/vmd/vmd.o 00:02:43.534 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:43.534 CC module/event/subsystems/iobuf/iobuf.o 00:02:43.534 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:43.534 CC module/event/subsystems/sock/sock.o 00:02:43.534 CC module/event/subsystems/fsdev/fsdev.o 00:02:43.534 CC module/event/subsystems/scheduler/scheduler.o 00:02:43.534 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:43.534 LIB libspdk_event_scheduler.a 00:02:43.534 LIB libspdk_event_keyring.a 00:02:43.534 LIB libspdk_event_vhost_blk.a 00:02:43.534 LIB libspdk_event_fsdev.a 00:02:43.534 LIB libspdk_event_vmd.a 00:02:43.534 LIB libspdk_event_sock.a 00:02:43.534 LIB libspdk_event_iobuf.a 00:02:43.534 SO libspdk_event_scheduler.so.4.0 00:02:43.534 SO libspdk_event_keyring.so.1.0 00:02:43.534 SO libspdk_event_fsdev.so.1.0 00:02:43.534 SO libspdk_event_vhost_blk.so.3.0 00:02:43.534 SO libspdk_event_vmd.so.6.0 00:02:43.534 SO libspdk_event_sock.so.5.0 00:02:43.534 SO libspdk_event_iobuf.so.3.0 00:02:43.534 SYMLINK libspdk_event_keyring.so 00:02:43.534 SYMLINK libspdk_event_scheduler.so 00:02:43.534 SYMLINK libspdk_event_vhost_blk.so 00:02:43.534 SYMLINK libspdk_event_fsdev.so 00:02:43.534 SYMLINK libspdk_event_sock.so 00:02:43.534 SYMLINK libspdk_event_vmd.so 00:02:43.534 SYMLINK libspdk_event_iobuf.so 00:02:44.099 CC module/event/subsystems/accel/accel.o 00:02:44.099 LIB libspdk_event_accel.a 00:02:44.099 SO 
libspdk_event_accel.so.6.0 00:02:44.358 SYMLINK libspdk_event_accel.so 00:02:44.615 CC module/event/subsystems/bdev/bdev.o 00:02:44.615 LIB libspdk_event_bdev.a 00:02:44.873 SO libspdk_event_bdev.so.6.0 00:02:44.873 SYMLINK libspdk_event_bdev.so 00:02:45.214 CC module/event/subsystems/nbd/nbd.o 00:02:45.214 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:45.214 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:45.214 CC module/event/subsystems/scsi/scsi.o 00:02:45.214 CC module/event/subsystems/ublk/ublk.o 00:02:45.214 LIB libspdk_event_nbd.a 00:02:45.214 SO libspdk_event_nbd.so.6.0 00:02:45.504 LIB libspdk_event_ublk.a 00:02:45.504 LIB libspdk_event_scsi.a 00:02:45.504 SYMLINK libspdk_event_nbd.so 00:02:45.504 SO libspdk_event_ublk.so.3.0 00:02:45.504 SO libspdk_event_scsi.so.6.0 00:02:45.504 LIB libspdk_event_nvmf.a 00:02:45.504 SYMLINK libspdk_event_ublk.so 00:02:45.504 SYMLINK libspdk_event_scsi.so 00:02:45.504 SO libspdk_event_nvmf.so.6.0 00:02:45.504 SYMLINK libspdk_event_nvmf.so 00:02:45.815 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:45.815 CC module/event/subsystems/iscsi/iscsi.o 00:02:45.815 LIB libspdk_event_vhost_scsi.a 00:02:45.815 SO libspdk_event_vhost_scsi.so.3.0 00:02:45.815 LIB libspdk_event_iscsi.a 00:02:46.088 SYMLINK libspdk_event_vhost_scsi.so 00:02:46.088 SO libspdk_event_iscsi.so.6.0 00:02:46.088 SYMLINK libspdk_event_iscsi.so 00:02:46.088 SO libspdk.so.6.0 00:02:46.346 SYMLINK libspdk.so 00:02:46.613 CC app/spdk_lspci/spdk_lspci.o 00:02:46.613 CXX app/trace/trace.o 00:02:46.613 CC app/trace_record/trace_record.o 00:02:46.613 CC app/spdk_nvme_identify/identify.o 00:02:46.613 CC app/spdk_top/spdk_top.o 00:02:46.613 CC app/spdk_nvme_discover/discovery_aer.o 00:02:46.613 CC app/spdk_nvme_perf/perf.o 00:02:46.613 TEST_HEADER include/spdk/accel_module.h 00:02:46.613 TEST_HEADER include/spdk/accel.h 00:02:46.613 TEST_HEADER include/spdk/assert.h 00:02:46.613 CC test/rpc_client/rpc_client_test.o 00:02:46.613 TEST_HEADER include/spdk/barrier.h 00:02:46.613 TEST_HEADER include/spdk/bdev.h 00:02:46.613 TEST_HEADER include/spdk/base64.h 00:02:46.613 TEST_HEADER include/spdk/bdev_module.h 00:02:46.613 TEST_HEADER include/spdk/bdev_zone.h 00:02:46.613 TEST_HEADER include/spdk/bit_array.h 00:02:46.613 TEST_HEADER include/spdk/bit_pool.h 00:02:46.613 TEST_HEADER include/spdk/blob_bdev.h 00:02:46.613 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:46.613 TEST_HEADER include/spdk/blob.h 00:02:46.613 TEST_HEADER include/spdk/blobfs.h 00:02:46.613 TEST_HEADER include/spdk/conf.h 00:02:46.613 TEST_HEADER include/spdk/config.h 00:02:46.613 TEST_HEADER include/spdk/cpuset.h 00:02:46.613 TEST_HEADER include/spdk/crc32.h 00:02:46.613 TEST_HEADER include/spdk/crc16.h 00:02:46.613 TEST_HEADER include/spdk/crc64.h 00:02:46.613 TEST_HEADER include/spdk/dif.h 00:02:46.613 TEST_HEADER include/spdk/dma.h 00:02:46.613 TEST_HEADER include/spdk/endian.h 00:02:46.613 TEST_HEADER include/spdk/env.h 00:02:46.613 TEST_HEADER include/spdk/env_dpdk.h 00:02:46.613 TEST_HEADER include/spdk/event.h 00:02:46.613 TEST_HEADER include/spdk/fd_group.h 00:02:46.613 TEST_HEADER include/spdk/fd.h 00:02:46.613 TEST_HEADER include/spdk/file.h 00:02:46.613 TEST_HEADER include/spdk/fsdev.h 00:02:46.613 TEST_HEADER include/spdk/fsdev_module.h 00:02:46.614 TEST_HEADER include/spdk/ftl.h 00:02:46.614 TEST_HEADER include/spdk/gpt_spec.h 00:02:46.614 TEST_HEADER include/spdk/hexlify.h 00:02:46.614 TEST_HEADER include/spdk/histogram_data.h 00:02:46.614 CC app/iscsi_tgt/iscsi_tgt.o 00:02:46.614 TEST_HEADER 
include/spdk/idxd.h 00:02:46.614 TEST_HEADER include/spdk/idxd_spec.h 00:02:46.614 TEST_HEADER include/spdk/init.h 00:02:46.614 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:46.614 TEST_HEADER include/spdk/ioat.h 00:02:46.614 TEST_HEADER include/spdk/ioat_spec.h 00:02:46.614 TEST_HEADER include/spdk/iscsi_spec.h 00:02:46.614 TEST_HEADER include/spdk/json.h 00:02:46.614 TEST_HEADER include/spdk/jsonrpc.h 00:02:46.614 TEST_HEADER include/spdk/keyring_module.h 00:02:46.614 TEST_HEADER include/spdk/log.h 00:02:46.614 TEST_HEADER include/spdk/keyring.h 00:02:46.614 TEST_HEADER include/spdk/likely.h 00:02:46.614 TEST_HEADER include/spdk/lvol.h 00:02:46.614 TEST_HEADER include/spdk/md5.h 00:02:46.614 CC app/spdk_dd/spdk_dd.o 00:02:46.614 TEST_HEADER include/spdk/memory.h 00:02:46.614 TEST_HEADER include/spdk/nbd.h 00:02:46.614 TEST_HEADER include/spdk/mmio.h 00:02:46.614 TEST_HEADER include/spdk/net.h 00:02:46.614 TEST_HEADER include/spdk/notify.h 00:02:46.614 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:46.614 TEST_HEADER include/spdk/nvme.h 00:02:46.614 TEST_HEADER include/spdk/nvme_intel.h 00:02:46.614 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:46.614 TEST_HEADER include/spdk/nvme_spec.h 00:02:46.614 TEST_HEADER include/spdk/nvme_zns.h 00:02:46.614 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:46.614 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:46.614 TEST_HEADER include/spdk/nvmf.h 00:02:46.614 CC app/nvmf_tgt/nvmf_main.o 00:02:46.614 TEST_HEADER include/spdk/nvmf_spec.h 00:02:46.614 TEST_HEADER include/spdk/nvmf_transport.h 00:02:46.614 TEST_HEADER include/spdk/opal.h 00:02:46.614 TEST_HEADER include/spdk/opal_spec.h 00:02:46.614 TEST_HEADER include/spdk/pci_ids.h 00:02:46.614 TEST_HEADER include/spdk/pipe.h 00:02:46.614 TEST_HEADER include/spdk/queue.h 00:02:46.614 TEST_HEADER include/spdk/reduce.h 00:02:46.614 TEST_HEADER include/spdk/rpc.h 00:02:46.614 TEST_HEADER include/spdk/scheduler.h 00:02:46.614 TEST_HEADER include/spdk/scsi.h 00:02:46.614 TEST_HEADER include/spdk/scsi_spec.h 00:02:46.614 TEST_HEADER include/spdk/sock.h 00:02:46.614 TEST_HEADER include/spdk/stdinc.h 00:02:46.614 TEST_HEADER include/spdk/string.h 00:02:46.614 TEST_HEADER include/spdk/thread.h 00:02:46.614 TEST_HEADER include/spdk/trace.h 00:02:46.614 TEST_HEADER include/spdk/trace_parser.h 00:02:46.614 TEST_HEADER include/spdk/tree.h 00:02:46.614 CC app/spdk_tgt/spdk_tgt.o 00:02:46.614 TEST_HEADER include/spdk/util.h 00:02:46.614 TEST_HEADER include/spdk/ublk.h 00:02:46.614 TEST_HEADER include/spdk/uuid.h 00:02:46.614 TEST_HEADER include/spdk/version.h 00:02:46.614 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:46.614 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:46.614 TEST_HEADER include/spdk/vmd.h 00:02:46.614 TEST_HEADER include/spdk/vhost.h 00:02:46.614 TEST_HEADER include/spdk/xor.h 00:02:46.614 TEST_HEADER include/spdk/zipf.h 00:02:46.614 CXX test/cpp_headers/accel.o 00:02:46.614 CXX test/cpp_headers/assert.o 00:02:46.614 CXX test/cpp_headers/accel_module.o 00:02:46.614 CXX test/cpp_headers/barrier.o 00:02:46.614 CXX test/cpp_headers/base64.o 00:02:46.614 CXX test/cpp_headers/bdev.o 00:02:46.614 CXX test/cpp_headers/bdev_module.o 00:02:46.614 CXX test/cpp_headers/bit_pool.o 00:02:46.614 CXX test/cpp_headers/bdev_zone.o 00:02:46.614 CXX test/cpp_headers/bit_array.o 00:02:46.614 CXX test/cpp_headers/blob_bdev.o 00:02:46.614 CXX test/cpp_headers/blobfs_bdev.o 00:02:46.614 CXX test/cpp_headers/blobfs.o 00:02:46.614 CXX test/cpp_headers/conf.o 00:02:46.614 CXX test/cpp_headers/config.o 00:02:46.614 CXX 
test/cpp_headers/blob.o 00:02:46.614 CXX test/cpp_headers/cpuset.o 00:02:46.614 CXX test/cpp_headers/crc16.o 00:02:46.614 CXX test/cpp_headers/crc32.o 00:02:46.614 CXX test/cpp_headers/crc64.o 00:02:46.614 CXX test/cpp_headers/dif.o 00:02:46.614 CXX test/cpp_headers/dma.o 00:02:46.614 CXX test/cpp_headers/env.o 00:02:46.614 CXX test/cpp_headers/endian.o 00:02:46.614 CXX test/cpp_headers/env_dpdk.o 00:02:46.614 CXX test/cpp_headers/fd_group.o 00:02:46.614 CXX test/cpp_headers/event.o 00:02:46.614 CXX test/cpp_headers/fd.o 00:02:46.614 CXX test/cpp_headers/fsdev.o 00:02:46.614 CXX test/cpp_headers/file.o 00:02:46.614 CXX test/cpp_headers/fsdev_module.o 00:02:46.614 CXX test/cpp_headers/ftl.o 00:02:46.614 CXX test/cpp_headers/hexlify.o 00:02:46.614 CXX test/cpp_headers/histogram_data.o 00:02:46.614 CXX test/cpp_headers/idxd.o 00:02:46.614 CXX test/cpp_headers/idxd_spec.o 00:02:46.614 CXX test/cpp_headers/init.o 00:02:46.614 CXX test/cpp_headers/gpt_spec.o 00:02:46.614 CXX test/cpp_headers/ioat_spec.o 00:02:46.614 CXX test/cpp_headers/ioat.o 00:02:46.614 CXX test/cpp_headers/iscsi_spec.o 00:02:46.614 CXX test/cpp_headers/json.o 00:02:46.614 CXX test/cpp_headers/jsonrpc.o 00:02:46.614 CXX test/cpp_headers/keyring.o 00:02:46.614 CXX test/cpp_headers/keyring_module.o 00:02:46.614 CXX test/cpp_headers/likely.o 00:02:46.614 CXX test/cpp_headers/log.o 00:02:46.614 CXX test/cpp_headers/memory.o 00:02:46.614 CXX test/cpp_headers/lvol.o 00:02:46.614 CXX test/cpp_headers/md5.o 00:02:46.614 CXX test/cpp_headers/mmio.o 00:02:46.614 CXX test/cpp_headers/net.o 00:02:46.614 CXX test/cpp_headers/nbd.o 00:02:46.614 CXX test/cpp_headers/nvme_intel.o 00:02:46.614 CXX test/cpp_headers/nvme_ocssd.o 00:02:46.614 CXX test/cpp_headers/notify.o 00:02:46.614 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:46.614 CXX test/cpp_headers/nvme.o 00:02:46.614 CXX test/cpp_headers/nvme_spec.o 00:02:46.614 CXX test/cpp_headers/nvme_zns.o 00:02:46.614 CXX test/cpp_headers/nvmf_cmd.o 00:02:46.614 CXX test/cpp_headers/nvmf.o 00:02:46.614 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:46.614 CXX test/cpp_headers/nvmf_spec.o 00:02:46.614 CXX test/cpp_headers/nvmf_transport.o 00:02:46.614 CXX test/cpp_headers/opal.o 00:02:46.883 CXX test/cpp_headers/opal_spec.o 00:02:46.883 LINK spdk_lspci 00:02:46.883 CC test/thread/poller_perf/poller_perf.o 00:02:46.883 CC examples/util/zipf/zipf.o 00:02:46.883 CC test/env/pci/pci_ut.o 00:02:46.883 CC test/env/vtophys/vtophys.o 00:02:46.883 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:46.883 CC test/app/histogram_perf/histogram_perf.o 00:02:46.883 CXX test/cpp_headers/pci_ids.o 00:02:46.883 CC test/env/memory/memory_ut.o 00:02:46.883 CC examples/ioat/verify/verify.o 00:02:46.883 CC test/app/stub/stub.o 00:02:46.883 CC test/app/jsoncat/jsoncat.o 00:02:46.883 CC examples/ioat/perf/perf.o 00:02:46.883 CC app/fio/nvme/fio_plugin.o 00:02:46.883 CC test/app/bdev_svc/bdev_svc.o 00:02:46.883 CC test/dma/test_dma/test_dma.o 00:02:46.883 CC app/fio/bdev/fio_plugin.o 00:02:47.153 LINK rpc_client_test 00:02:47.153 LINK spdk_trace_record 00:02:47.153 LINK interrupt_tgt 00:02:47.153 LINK spdk_tgt 00:02:47.153 CC test/env/mem_callbacks/mem_callbacks.o 00:02:47.153 LINK spdk_nvme_discover 00:02:47.153 CXX test/cpp_headers/pipe.o 00:02:47.153 LINK poller_perf 00:02:47.153 LINK nvmf_tgt 00:02:47.153 LINK zipf 00:02:47.414 LINK histogram_perf 00:02:47.414 LINK iscsi_tgt 00:02:47.414 CXX test/cpp_headers/queue.o 00:02:47.414 CXX test/cpp_headers/reduce.o 00:02:47.414 CXX test/cpp_headers/rpc.o 00:02:47.414 CC 
test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:47.414 CXX test/cpp_headers/scheduler.o 00:02:47.414 CXX test/cpp_headers/scsi.o 00:02:47.414 LINK env_dpdk_post_init 00:02:47.414 CXX test/cpp_headers/scsi_spec.o 00:02:47.414 CXX test/cpp_headers/sock.o 00:02:47.414 CXX test/cpp_headers/stdinc.o 00:02:47.414 CXX test/cpp_headers/string.o 00:02:47.414 CXX test/cpp_headers/thread.o 00:02:47.414 CXX test/cpp_headers/trace.o 00:02:47.414 CXX test/cpp_headers/trace_parser.o 00:02:47.414 CXX test/cpp_headers/tree.o 00:02:47.414 CXX test/cpp_headers/ublk.o 00:02:47.414 CXX test/cpp_headers/util.o 00:02:47.414 CXX test/cpp_headers/uuid.o 00:02:47.414 CXX test/cpp_headers/version.o 00:02:47.414 CXX test/cpp_headers/vfio_user_pci.o 00:02:47.414 CXX test/cpp_headers/vfio_user_spec.o 00:02:47.414 CXX test/cpp_headers/vhost.o 00:02:47.414 CXX test/cpp_headers/vmd.o 00:02:47.414 CXX test/cpp_headers/xor.o 00:02:47.414 CXX test/cpp_headers/zipf.o 00:02:47.414 LINK vtophys 00:02:47.414 LINK jsoncat 00:02:47.414 LINK verify 00:02:47.414 LINK ioat_perf 00:02:47.414 LINK bdev_svc 00:02:47.414 LINK stub 00:02:47.414 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:47.414 LINK spdk_trace 00:02:47.414 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:47.414 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:47.672 LINK spdk_dd 00:02:47.672 LINK pci_ut 00:02:47.672 LINK test_dma 00:02:47.930 CC examples/idxd/perf/perf.o 00:02:47.930 CC examples/vmd/led/led.o 00:02:47.930 CC examples/vmd/lsvmd/lsvmd.o 00:02:47.930 CC examples/sock/hello_world/hello_sock.o 00:02:47.930 CC test/event/reactor/reactor.o 00:02:47.930 CC examples/thread/thread/thread_ex.o 00:02:47.930 CC test/event/reactor_perf/reactor_perf.o 00:02:47.930 CC test/event/event_perf/event_perf.o 00:02:47.930 CC test/event/app_repeat/app_repeat.o 00:02:47.930 CC test/event/scheduler/scheduler.o 00:02:47.930 CC app/vhost/vhost.o 00:02:47.930 LINK spdk_bdev 00:02:47.930 LINK nvme_fuzz 00:02:47.930 LINK lsvmd 00:02:47.930 LINK spdk_nvme_identify 00:02:47.930 LINK mem_callbacks 00:02:47.930 LINK led 00:02:47.930 LINK vhost_fuzz 00:02:47.930 LINK reactor 00:02:47.930 LINK spdk_nvme 00:02:47.930 LINK reactor_perf 00:02:48.188 LINK event_perf 00:02:48.188 LINK app_repeat 00:02:48.188 LINK hello_sock 00:02:48.188 LINK spdk_top 00:02:48.188 LINK spdk_nvme_perf 00:02:48.188 LINK vhost 00:02:48.188 LINK thread 00:02:48.188 LINK idxd_perf 00:02:48.188 LINK scheduler 00:02:48.188 CC test/nvme/simple_copy/simple_copy.o 00:02:48.188 CC test/nvme/boot_partition/boot_partition.o 00:02:48.188 CC test/nvme/connect_stress/connect_stress.o 00:02:48.188 CC test/nvme/aer/aer.o 00:02:48.188 CC test/nvme/startup/startup.o 00:02:48.188 CC test/nvme/reset/reset.o 00:02:48.188 CC test/nvme/cuse/cuse.o 00:02:48.188 CC test/nvme/compliance/nvme_compliance.o 00:02:48.188 CC test/nvme/e2edp/nvme_dp.o 00:02:48.188 CC test/nvme/fdp/fdp.o 00:02:48.188 CC test/nvme/reserve/reserve.o 00:02:48.188 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:48.188 CC test/nvme/sgl/sgl.o 00:02:48.188 CC test/nvme/err_injection/err_injection.o 00:02:48.188 CC test/nvme/overhead/overhead.o 00:02:48.188 CC test/nvme/fused_ordering/fused_ordering.o 00:02:48.188 CC test/blobfs/mkfs/mkfs.o 00:02:48.188 CC test/accel/dif/dif.o 00:02:48.446 CC test/lvol/esnap/esnap.o 00:02:48.446 LINK boot_partition 00:02:48.446 LINK connect_stress 00:02:48.446 LINK startup 00:02:48.446 LINK doorbell_aers 00:02:48.446 LINK err_injection 00:02:48.446 LINK reserve 00:02:48.446 LINK simple_copy 00:02:48.446 LINK fused_ordering 00:02:48.446 LINK 
mkfs 00:02:48.446 LINK memory_ut 00:02:48.446 CC examples/nvme/reconnect/reconnect.o 00:02:48.446 CC examples/nvme/hello_world/hello_world.o 00:02:48.446 LINK reset 00:02:48.704 CC examples/nvme/abort/abort.o 00:02:48.704 CC examples/nvme/hotplug/hotplug.o 00:02:48.704 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:48.704 LINK aer 00:02:48.704 LINK nvme_dp 00:02:48.704 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:48.704 CC examples/nvme/arbitration/arbitration.o 00:02:48.704 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:48.704 LINK sgl 00:02:48.704 LINK overhead 00:02:48.704 LINK fdp 00:02:48.704 CC examples/accel/perf/accel_perf.o 00:02:48.704 LINK nvme_compliance 00:02:48.704 CC examples/blob/hello_world/hello_blob.o 00:02:48.704 CC examples/blob/cli/blobcli.o 00:02:48.704 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:48.704 LINK cmb_copy 00:02:48.704 LINK pmr_persistence 00:02:48.963 LINK hotplug 00:02:48.963 LINK hello_world 00:02:48.963 LINK reconnect 00:02:48.963 LINK hello_blob 00:02:48.963 LINK arbitration 00:02:48.963 LINK abort 00:02:48.963 LINK hello_fsdev 00:02:48.963 LINK dif 00:02:49.221 LINK nvme_manage 00:02:49.221 LINK accel_perf 00:02:49.221 LINK blobcli 00:02:49.480 LINK iscsi_fuzz 00:02:49.480 LINK cuse 00:02:49.480 CC test/bdev/bdevio/bdevio.o 00:02:49.739 CC examples/bdev/hello_world/hello_bdev.o 00:02:49.739 CC examples/bdev/bdevperf/bdevperf.o 00:02:49.998 LINK bdevio 00:02:49.998 LINK hello_bdev 00:02:50.566 LINK bdevperf 00:02:50.825 CC examples/nvmf/nvmf/nvmf.o 00:02:51.397 LINK nvmf 00:02:53.303 LINK esnap 00:02:53.562 00:02:53.562 real 0m59.681s 00:02:53.562 user 8m52.389s 00:02:53.562 sys 3m34.335s 00:02:53.562 10:04:47 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:53.562 10:04:47 make -- common/autotest_common.sh@10 -- $ set +x 00:02:53.562 ************************************ 00:02:53.562 END TEST make 00:02:53.562 ************************************ 00:02:53.562 10:04:47 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:53.562 10:04:47 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:53.562 10:04:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:53.562 10:04:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.562 10:04:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:53.562 10:04:47 -- pm/common@44 -- $ pid=3614966 00:02:53.562 10:04:47 -- pm/common@50 -- $ kill -TERM 3614966 00:02:53.562 10:04:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.562 10:04:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:53.562 10:04:47 -- pm/common@44 -- $ pid=3614968 00:02:53.562 10:04:47 -- pm/common@50 -- $ kill -TERM 3614968 00:02:53.562 10:04:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.562 10:04:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:53.562 10:04:47 -- pm/common@44 -- $ pid=3614970 00:02:53.562 10:04:47 -- pm/common@50 -- $ kill -TERM 3614970 00:02:53.562 10:04:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.562 10:04:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:53.562 10:04:47 -- pm/common@44 -- $ pid=3614991 00:02:53.562 10:04:47 -- pm/common@50 -- $ sudo -E kill -TERM 3614991 00:02:53.562 
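The xtrace entries just above (pm/common@29 through pm/common@50) record how the harness tears down its resource monitors at the end of the make test: for each collector it checks for a <name>.pid file under the power output directory and sends SIGTERM to the recorded PID. Below is a minimal sketch of that pattern, assuming the directory and collector names taken from the pid files in the trace; the pid-file reading and error handling are simplified illustrations, not the actual pm/common implementation.

#!/usr/bin/env bash
# Sketch of the monitor teardown traced above (pm/common: signal_monitor_resources).
# The output directory and collector names come from the pid files in the log;
# everything else here is illustrative.
PM_OUTPUT_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm)

signal_monitor_resources() {
    local signal=$1 monitor pid
    for monitor in "${MONITOR_RESOURCES[@]}"; do
        # Each collector wrote its PID to <name>.pid when it was started.
        [[ -e $PM_OUTPUT_DIR/$monitor.pid ]] || continue
        pid=$(<"$PM_OUTPUT_DIR/$monitor.pid")
        if [[ $monitor == collect-bmc-pm ]]; then
            # In the trace this collector is signalled via sudo -E.
            sudo -E kill -"$signal" "$pid" 2>/dev/null || true
        else
            kill -"$signal" "$pid" 2>/dev/null || true
        fi
    done
}

signal_monitor_resources TERM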
10:04:47 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:53.562 10:04:47 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:53.562 10:04:47 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:53.562 10:04:47 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:53.562 10:04:47 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:53.821 10:04:47 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:53.821 10:04:47 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:53.821 10:04:47 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:53.821 10:04:47 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:53.821 10:04:47 -- scripts/common.sh@336 -- # IFS=.-: 00:02:53.821 10:04:47 -- scripts/common.sh@336 -- # read -ra ver1 00:02:53.821 10:04:47 -- scripts/common.sh@337 -- # IFS=.-: 00:02:53.821 10:04:47 -- scripts/common.sh@337 -- # read -ra ver2 00:02:53.821 10:04:47 -- scripts/common.sh@338 -- # local 'op=<' 00:02:53.821 10:04:47 -- scripts/common.sh@340 -- # ver1_l=2 00:02:53.821 10:04:47 -- scripts/common.sh@341 -- # ver2_l=1 00:02:53.821 10:04:47 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:53.821 10:04:47 -- scripts/common.sh@344 -- # case "$op" in 00:02:53.821 10:04:47 -- scripts/common.sh@345 -- # : 1 00:02:53.821 10:04:47 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:53.821 10:04:47 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:53.821 10:04:47 -- scripts/common.sh@365 -- # decimal 1 00:02:53.821 10:04:47 -- scripts/common.sh@353 -- # local d=1 00:02:53.821 10:04:47 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:53.821 10:04:47 -- scripts/common.sh@355 -- # echo 1 00:02:53.821 10:04:47 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:53.821 10:04:47 -- scripts/common.sh@366 -- # decimal 2 00:02:53.821 10:04:47 -- scripts/common.sh@353 -- # local d=2 00:02:53.821 10:04:47 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:53.821 10:04:47 -- scripts/common.sh@355 -- # echo 2 00:02:53.821 10:04:47 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:53.821 10:04:47 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:53.821 10:04:47 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:53.821 10:04:47 -- scripts/common.sh@368 -- # return 0 00:02:53.821 10:04:47 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:53.821 10:04:47 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:53.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.821 --rc genhtml_branch_coverage=1 00:02:53.821 --rc genhtml_function_coverage=1 00:02:53.821 --rc genhtml_legend=1 00:02:53.821 --rc geninfo_all_blocks=1 00:02:53.821 --rc geninfo_unexecuted_blocks=1 00:02:53.821 00:02:53.821 ' 00:02:53.821 10:04:47 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:53.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.821 --rc genhtml_branch_coverage=1 00:02:53.821 --rc genhtml_function_coverage=1 00:02:53.821 --rc genhtml_legend=1 00:02:53.821 --rc geninfo_all_blocks=1 00:02:53.821 --rc geninfo_unexecuted_blocks=1 00:02:53.821 00:02:53.821 ' 00:02:53.821 10:04:47 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:53.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.821 --rc genhtml_branch_coverage=1 00:02:53.821 
--rc genhtml_function_coverage=1 00:02:53.821 --rc genhtml_legend=1 00:02:53.821 --rc geninfo_all_blocks=1 00:02:53.821 --rc geninfo_unexecuted_blocks=1 00:02:53.821 00:02:53.821 ' 00:02:53.821 10:04:47 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:53.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.821 --rc genhtml_branch_coverage=1 00:02:53.821 --rc genhtml_function_coverage=1 00:02:53.821 --rc genhtml_legend=1 00:02:53.821 --rc geninfo_all_blocks=1 00:02:53.821 --rc geninfo_unexecuted_blocks=1 00:02:53.821 00:02:53.821 ' 00:02:53.821 10:04:47 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:53.821 10:04:47 -- nvmf/common.sh@7 -- # uname -s 00:02:53.821 10:04:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:53.821 10:04:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:53.821 10:04:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:53.821 10:04:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:53.821 10:04:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:53.821 10:04:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:53.821 10:04:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:53.821 10:04:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:53.821 10:04:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:53.821 10:04:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:53.821 10:04:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:02:53.821 10:04:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:02:53.821 10:04:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:53.821 10:04:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:53.821 10:04:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:53.821 10:04:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:53.821 10:04:47 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:53.821 10:04:47 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:53.821 10:04:47 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:53.822 10:04:47 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:53.822 10:04:47 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:53.822 10:04:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.822 10:04:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.822 10:04:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.822 10:04:47 -- paths/export.sh@5 -- # export PATH 00:02:53.822 10:04:47 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.822 10:04:47 -- nvmf/common.sh@51 -- # : 0 00:02:53.822 10:04:47 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:53.822 10:04:47 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:53.822 10:04:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:53.822 10:04:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:53.822 10:04:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:53.822 10:04:47 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:53.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:53.822 10:04:47 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:53.822 10:04:47 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:53.822 10:04:47 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:53.822 10:04:47 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:53.822 10:04:47 -- spdk/autotest.sh@32 -- # uname -s 00:02:53.822 10:04:47 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:53.822 10:04:47 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:53.822 10:04:47 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:53.822 10:04:47 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:53.822 10:04:47 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:53.822 10:04:47 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:53.822 10:04:47 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:53.822 10:04:47 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:53.822 10:04:47 -- spdk/autotest.sh@48 -- # udevadm_pid=3679296 00:02:53.822 10:04:47 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:53.822 10:04:47 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:53.822 10:04:47 -- pm/common@17 -- # local monitor 00:02:53.822 10:04:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.822 10:04:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.822 10:04:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.822 10:04:47 -- pm/common@21 -- # date +%s 00:02:53.822 10:04:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.822 10:04:47 -- pm/common@21 -- # date +%s 00:02:53.822 10:04:47 -- pm/common@25 -- # sleep 1 00:02:53.822 10:04:47 -- pm/common@21 -- # date +%s 00:02:53.822 10:04:47 -- pm/common@21 -- # date +%s 00:02:53.822 10:04:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734080687 00:02:53.822 10:04:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734080687 00:02:53.822 10:04:47 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 
-p monitor.autotest.sh.1734080687 00:02:53.822 10:04:47 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734080687 00:02:53.822 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734080687_collect-cpu-load.pm.log 00:02:53.822 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734080687_collect-bmc-pm.bmc.pm.log 00:02:53.822 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734080687_collect-vmstat.pm.log 00:02:53.822 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734080687_collect-cpu-temp.pm.log 00:02:54.758 10:04:48 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:54.759 10:04:48 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:54.759 10:04:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:54.759 10:04:48 -- common/autotest_common.sh@10 -- # set +x 00:02:54.759 10:04:48 -- spdk/autotest.sh@59 -- # create_test_list 00:02:54.759 10:04:48 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:54.759 10:04:48 -- common/autotest_common.sh@10 -- # set +x 00:02:54.759 10:04:48 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:54.759 10:04:48 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:54.759 10:04:48 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:54.759 10:04:48 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:54.759 10:04:48 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:54.759 10:04:48 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:54.759 10:04:48 -- common/autotest_common.sh@1457 -- # uname 00:02:54.759 10:04:48 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:54.759 10:04:48 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:54.759 10:04:48 -- common/autotest_common.sh@1477 -- # uname 00:02:54.759 10:04:48 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:54.759 10:04:48 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:54.759 10:04:48 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:55.017 lcov: LCOV version 1.15 00:02:55.017 10:04:48 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:13.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:13.106 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:19.682 10:05:12 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:19.682 10:05:12 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:03:19.682 10:05:12 -- common/autotest_common.sh@10 -- # set +x 00:03:19.682 10:05:12 -- spdk/autotest.sh@78 -- # rm -f 00:03:19.682 10:05:12 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:21.588 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:21.588 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:21.588 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:21.588 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:21.588 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:21.588 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:21.588 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:21.588 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:21.588 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:21.588 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:21.588 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:21.847 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:21.847 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:21.847 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:21.847 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:21.847 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:21.847 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:21.847 10:05:15 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:21.847 10:05:15 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:21.847 10:05:15 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:21.847 10:05:15 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:21.847 10:05:15 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:21.847 10:05:15 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:21.847 10:05:15 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:21.847 10:05:15 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:03:21.847 10:05:15 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:21.847 10:05:15 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:21.847 10:05:15 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:21.847 10:05:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:21.847 10:05:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:21.847 10:05:15 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:21.847 10:05:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:21.847 10:05:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:21.847 10:05:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:21.847 10:05:15 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:21.847 10:05:15 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:21.847 No valid GPT data, bailing 00:03:21.847 10:05:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:21.847 10:05:15 -- scripts/common.sh@394 -- # pt= 00:03:21.847 10:05:15 -- scripts/common.sh@395 -- # return 1 00:03:21.847 10:05:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:21.847 1+0 records in 00:03:21.847 1+0 records out 00:03:21.847 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00420156 s, 250 MB/s 
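[editor's note] The trace above shows why the first MiB of the namespace gets zeroed: spdk-gpt.py and blkid both find no partition table on /dev/nvme0n1 (pt is empty, block_in_use returns 1), so autotest treats the device as free and clears it with dd. The reported rate is self-consistent: 1,048,576 bytes in 0.00420156 s is roughly 2.5e8 B/s, i.e. the 250 MB/s printed. As a minimal standalone sketch of the same check (assuming the same device path as this run; not part of the original log):

    dev=/dev/nvme0n1                              # namespace exposed on this particular node
    pt=$(blkid -s PTTYPE -o value "$dev")         # empty output means no partition table found
    if [ -z "$pt" ]; then
        dd if=/dev/zero of="$dev" bs=1M count=1   # wipe the first MiB of stale metadata
    fi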
00:03:21.847 10:05:15 -- spdk/autotest.sh@105 -- # sync 00:03:22.107 10:05:15 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:22.107 10:05:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:22.107 10:05:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:27.385 10:05:20 -- spdk/autotest.sh@111 -- # uname -s 00:03:27.385 10:05:20 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:27.385 10:05:20 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:27.385 10:05:20 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:29.922 Hugepages 00:03:29.922 node hugesize free / total 00:03:29.922 node0 1048576kB 0 / 0 00:03:29.922 node0 2048kB 0 / 0 00:03:29.922 node1 1048576kB 0 / 0 00:03:29.922 node1 2048kB 0 / 0 00:03:29.922 00:03:29.922 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:29.922 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:29.922 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:29.922 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:29.922 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:29.922 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:29.922 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:29.922 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:29.922 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:29.922 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:29.922 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:29.922 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:29.922 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:29.922 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:29.922 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:29.922 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:29.922 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:29.922 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:29.922 10:05:23 -- spdk/autotest.sh@117 -- # uname -s 00:03:29.922 10:05:23 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:29.922 10:05:23 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:29.922 10:05:23 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:32.458 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:32.458 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:32.458 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:32.458 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:32.458 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:32.458 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:32.717 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:32.717 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:32.717 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:32.717 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:32.717 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:32.717 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:32.717 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:32.717 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:32.717 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:32.717 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:33.656 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:33.656 10:05:27 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:34.594 10:05:28 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:34.594 10:05:28 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:34.594 10:05:28 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:34.594 10:05:28 -- 
common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:34.594 10:05:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:34.594 10:05:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:34.594 10:05:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:34.594 10:05:28 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:34.594 10:05:28 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:34.594 10:05:28 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:34.594 10:05:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:34.594 10:05:28 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:37.132 Waiting for block devices as requested 00:03:37.132 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:37.390 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:37.390 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:37.390 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:37.649 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:37.649 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:37.649 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:38.019 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:38.019 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:38.019 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:38.019 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:38.019 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:38.278 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:38.278 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:38.278 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:38.278 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:38.537 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:38.537 10:05:32 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:38.537 10:05:32 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:38.537 10:05:32 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:38.537 10:05:32 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:38.537 10:05:32 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:38.537 10:05:32 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:38.537 10:05:32 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:38.537 10:05:32 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:38.537 10:05:32 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:38.537 10:05:32 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:38.537 10:05:32 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:38.537 10:05:32 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:38.537 10:05:32 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:38.537 10:05:32 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:38.537 10:05:32 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:38.537 10:05:32 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:38.537 10:05:32 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:38.537 10:05:32 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:38.537 10:05:32 -- 
common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:38.537 10:05:32 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:38.537 10:05:32 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:38.537 10:05:32 -- common/autotest_common.sh@1543 -- # continue 00:03:38.537 10:05:32 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:38.537 10:05:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:38.537 10:05:32 -- common/autotest_common.sh@10 -- # set +x 00:03:38.537 10:05:32 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:38.537 10:05:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:38.537 10:05:32 -- common/autotest_common.sh@10 -- # set +x 00:03:38.537 10:05:32 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:41.072 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:41.072 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:41.072 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:41.072 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:41.072 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:41.072 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:41.072 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:41.072 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:41.072 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:41.072 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:41.072 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:41.072 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:41.072 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:41.072 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:41.072 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:41.072 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:42.010 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:42.010 10:05:35 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:42.010 10:05:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:42.010 10:05:35 -- common/autotest_common.sh@10 -- # set +x 00:03:42.010 10:05:35 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:42.010 10:05:35 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:42.010 10:05:35 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:42.010 10:05:35 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:42.010 10:05:35 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:42.010 10:05:35 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:42.010 10:05:35 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:42.010 10:05:35 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:42.010 10:05:35 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:42.010 10:05:35 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:42.010 10:05:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:42.010 10:05:35 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:42.010 10:05:35 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:42.010 10:05:35 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:42.010 10:05:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:42.010 10:05:35 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:42.010 10:05:35 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:42.010 10:05:35 -- common/autotest_common.sh@1566 
-- # device=0x0a54 00:03:42.010 10:05:35 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:42.010 10:05:35 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:42.010 10:05:35 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:42.010 10:05:35 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:42.010 10:05:35 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:42.010 10:05:35 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:42.010 10:05:35 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3693182 00:03:42.010 10:05:35 -- common/autotest_common.sh@1585 -- # waitforlisten 3693182 00:03:42.010 10:05:35 -- common/autotest_common.sh@835 -- # '[' -z 3693182 ']' 00:03:42.010 10:05:35 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:42.010 10:05:35 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:42.010 10:05:35 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:42.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:42.010 10:05:35 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:42.010 10:05:35 -- common/autotest_common.sh@10 -- # set +x 00:03:42.269 [2024-12-13 10:05:35.996301] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:03:42.269 [2024-12-13 10:05:35.996392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3693182 ] 00:03:42.269 [2024-12-13 10:05:36.107913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.528 [2024-12-13 10:05:36.212279] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:03:43.465 10:05:37 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:43.465 10:05:37 -- common/autotest_common.sh@868 -- # return 0 00:03:43.465 10:05:37 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:43.465 10:05:37 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:43.465 10:05:37 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:46.753 nvme0n1 00:03:46.753 10:05:40 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:46.753 [2024-12-13 10:05:40.253191] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:46.753 [2024-12-13 10:05:40.253243] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:46.753 request: 00:03:46.753 { 00:03:46.753 "nvme_ctrlr_name": "nvme0", 00:03:46.753 "password": "test", 00:03:46.753 "method": "bdev_nvme_opal_revert", 00:03:46.753 "req_id": 1 00:03:46.753 } 00:03:46.753 Got JSON-RPC error response 00:03:46.753 response: 00:03:46.753 { 00:03:46.753 "code": -32603, 00:03:46.753 "message": "Internal error" 00:03:46.753 } 00:03:46.753 10:05:40 -- common/autotest_common.sh@1591 -- # true 00:03:46.753 10:05:40 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:46.753 10:05:40 -- common/autotest_common.sh@1595 -- # killprocess 3693182 00:03:46.753 10:05:40 -- 
common/autotest_common.sh@954 -- # '[' -z 3693182 ']' 00:03:46.753 10:05:40 -- common/autotest_common.sh@958 -- # kill -0 3693182 00:03:46.753 10:05:40 -- common/autotest_common.sh@959 -- # uname 00:03:46.753 10:05:40 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:46.753 10:05:40 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3693182 00:03:46.753 10:05:40 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:46.753 10:05:40 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:46.753 10:05:40 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3693182' 00:03:46.753 killing process with pid 3693182 00:03:46.753 10:05:40 -- common/autotest_common.sh@973 -- # kill 3693182 00:03:46.753 10:05:40 -- common/autotest_common.sh@978 -- # wait 3693182 00:03:50.104 10:05:43 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:50.104 10:05:43 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:50.104 10:05:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:50.104 10:05:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:50.104 10:05:43 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:50.104 10:05:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.104 10:05:43 -- common/autotest_common.sh@10 -- # set +x 00:03:50.104 10:05:43 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:50.104 10:05:43 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:50.104 10:05:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.104 10:05:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.104 10:05:43 -- common/autotest_common.sh@10 -- # set +x 00:03:50.104 ************************************ 00:03:50.104 START TEST env 00:03:50.104 ************************************ 00:03:50.104 10:05:43 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:50.104 * Looking for test storage... 00:03:50.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:50.104 10:05:43 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:50.104 10:05:43 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:50.104 10:05:43 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:50.363 10:05:44 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:50.363 10:05:44 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.363 10:05:44 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.363 10:05:44 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.363 10:05:44 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.363 10:05:44 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.363 10:05:44 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.363 10:05:44 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.363 10:05:44 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.363 10:05:44 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.363 10:05:44 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.363 10:05:44 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.363 10:05:44 env -- scripts/common.sh@344 -- # case "$op" in 00:03:50.363 10:05:44 env -- scripts/common.sh@345 -- # : 1 00:03:50.363 10:05:44 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.363 10:05:44 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:50.363 10:05:44 env -- scripts/common.sh@365 -- # decimal 1 00:03:50.363 10:05:44 env -- scripts/common.sh@353 -- # local d=1 00:03:50.363 10:05:44 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.363 10:05:44 env -- scripts/common.sh@355 -- # echo 1 00:03:50.363 10:05:44 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.363 10:05:44 env -- scripts/common.sh@366 -- # decimal 2 00:03:50.363 10:05:44 env -- scripts/common.sh@353 -- # local d=2 00:03:50.363 10:05:44 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.363 10:05:44 env -- scripts/common.sh@355 -- # echo 2 00:03:50.363 10:05:44 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.363 10:05:44 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.363 10:05:44 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.363 10:05:44 env -- scripts/common.sh@368 -- # return 0 00:03:50.363 10:05:44 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.363 10:05:44 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:50.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.363 --rc genhtml_branch_coverage=1 00:03:50.363 --rc genhtml_function_coverage=1 00:03:50.363 --rc genhtml_legend=1 00:03:50.363 --rc geninfo_all_blocks=1 00:03:50.363 --rc geninfo_unexecuted_blocks=1 00:03:50.363 00:03:50.363 ' 00:03:50.363 10:05:44 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:50.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.363 --rc genhtml_branch_coverage=1 00:03:50.364 --rc genhtml_function_coverage=1 00:03:50.364 --rc genhtml_legend=1 00:03:50.364 --rc geninfo_all_blocks=1 00:03:50.364 --rc geninfo_unexecuted_blocks=1 00:03:50.364 00:03:50.364 ' 00:03:50.364 10:05:44 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:50.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.364 --rc genhtml_branch_coverage=1 00:03:50.364 --rc genhtml_function_coverage=1 00:03:50.364 --rc genhtml_legend=1 00:03:50.364 --rc geninfo_all_blocks=1 00:03:50.364 --rc geninfo_unexecuted_blocks=1 00:03:50.364 00:03:50.364 ' 00:03:50.364 10:05:44 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:50.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.364 --rc genhtml_branch_coverage=1 00:03:50.364 --rc genhtml_function_coverage=1 00:03:50.364 --rc genhtml_legend=1 00:03:50.364 --rc geninfo_all_blocks=1 00:03:50.364 --rc geninfo_unexecuted_blocks=1 00:03:50.364 00:03:50.364 ' 00:03:50.364 10:05:44 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:50.364 10:05:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.364 10:05:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.364 10:05:44 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.364 ************************************ 00:03:50.364 START TEST env_memory 00:03:50.364 ************************************ 00:03:50.364 10:05:44 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:50.364 00:03:50.364 00:03:50.364 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.364 http://cunit.sourceforge.net/ 00:03:50.364 00:03:50.364 00:03:50.364 Suite: memory 00:03:50.364 Test: alloc and free memory map ...[2024-12-13 10:05:44.122368] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:50.364 passed 00:03:50.364 Test: mem map translation ...[2024-12-13 10:05:44.162217] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:50.364 [2024-12-13 10:05:44.162241] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:50.364 [2024-12-13 10:05:44.162287] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:50.364 [2024-12-13 10:05:44.162300] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:50.364 passed 00:03:50.364 Test: mem map registration ...[2024-12-13 10:05:44.223875] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:50.364 [2024-12-13 10:05:44.223898] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:50.364 passed 00:03:50.624 Test: mem map adjacent registrations ...passed 00:03:50.624 00:03:50.624 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.624 suites 1 1 n/a 0 0 00:03:50.624 tests 4 4 4 0 0 00:03:50.624 asserts 152 152 152 0 n/a 00:03:50.624 00:03:50.624 Elapsed time = 0.226 seconds 00:03:50.624 00:03:50.624 real 0m0.260s 00:03:50.624 user 0m0.234s 00:03:50.624 sys 0m0.025s 00:03:50.624 10:05:44 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.624 10:05:44 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:50.624 ************************************ 00:03:50.624 END TEST env_memory 00:03:50.624 ************************************ 00:03:50.624 10:05:44 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:50.624 10:05:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.624 10:05:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.624 10:05:44 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.624 ************************************ 00:03:50.624 START TEST env_vtophys 00:03:50.624 ************************************ 00:03:50.624 10:05:44 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:50.624 EAL: lib.eal log level changed from notice to debug 00:03:50.624 EAL: Detected lcore 0 as core 0 on socket 0 00:03:50.624 EAL: Detected lcore 1 as core 1 on socket 0 00:03:50.624 EAL: Detected lcore 2 as core 2 on socket 0 00:03:50.624 EAL: Detected lcore 3 as core 3 on socket 0 00:03:50.624 EAL: Detected lcore 4 as core 4 on socket 0 00:03:50.624 EAL: Detected lcore 5 as core 5 on socket 0 00:03:50.624 EAL: Detected lcore 6 as core 6 on socket 0 00:03:50.624 EAL: Detected lcore 7 as core 8 on socket 0 00:03:50.624 EAL: Detected lcore 8 as core 9 on socket 0 00:03:50.624 EAL: Detected lcore 9 as core 10 on socket 0 00:03:50.624 EAL: Detected lcore 10 as 
core 11 on socket 0 00:03:50.624 EAL: Detected lcore 11 as core 12 on socket 0 00:03:50.624 EAL: Detected lcore 12 as core 13 on socket 0 00:03:50.624 EAL: Detected lcore 13 as core 16 on socket 0 00:03:50.624 EAL: Detected lcore 14 as core 17 on socket 0 00:03:50.624 EAL: Detected lcore 15 as core 18 on socket 0 00:03:50.624 EAL: Detected lcore 16 as core 19 on socket 0 00:03:50.624 EAL: Detected lcore 17 as core 20 on socket 0 00:03:50.624 EAL: Detected lcore 18 as core 21 on socket 0 00:03:50.624 EAL: Detected lcore 19 as core 25 on socket 0 00:03:50.624 EAL: Detected lcore 20 as core 26 on socket 0 00:03:50.624 EAL: Detected lcore 21 as core 27 on socket 0 00:03:50.624 EAL: Detected lcore 22 as core 28 on socket 0 00:03:50.624 EAL: Detected lcore 23 as core 29 on socket 0 00:03:50.624 EAL: Detected lcore 24 as core 0 on socket 1 00:03:50.624 EAL: Detected lcore 25 as core 1 on socket 1 00:03:50.624 EAL: Detected lcore 26 as core 2 on socket 1 00:03:50.624 EAL: Detected lcore 27 as core 3 on socket 1 00:03:50.624 EAL: Detected lcore 28 as core 4 on socket 1 00:03:50.624 EAL: Detected lcore 29 as core 5 on socket 1 00:03:50.624 EAL: Detected lcore 30 as core 6 on socket 1 00:03:50.624 EAL: Detected lcore 31 as core 8 on socket 1 00:03:50.624 EAL: Detected lcore 32 as core 9 on socket 1 00:03:50.624 EAL: Detected lcore 33 as core 10 on socket 1 00:03:50.624 EAL: Detected lcore 34 as core 11 on socket 1 00:03:50.624 EAL: Detected lcore 35 as core 12 on socket 1 00:03:50.624 EAL: Detected lcore 36 as core 13 on socket 1 00:03:50.624 EAL: Detected lcore 37 as core 16 on socket 1 00:03:50.624 EAL: Detected lcore 38 as core 17 on socket 1 00:03:50.624 EAL: Detected lcore 39 as core 18 on socket 1 00:03:50.624 EAL: Detected lcore 40 as core 19 on socket 1 00:03:50.624 EAL: Detected lcore 41 as core 20 on socket 1 00:03:50.624 EAL: Detected lcore 42 as core 21 on socket 1 00:03:50.624 EAL: Detected lcore 43 as core 25 on socket 1 00:03:50.624 EAL: Detected lcore 44 as core 26 on socket 1 00:03:50.624 EAL: Detected lcore 45 as core 27 on socket 1 00:03:50.624 EAL: Detected lcore 46 as core 28 on socket 1 00:03:50.624 EAL: Detected lcore 47 as core 29 on socket 1 00:03:50.624 EAL: Detected lcore 48 as core 0 on socket 0 00:03:50.624 EAL: Detected lcore 49 as core 1 on socket 0 00:03:50.624 EAL: Detected lcore 50 as core 2 on socket 0 00:03:50.624 EAL: Detected lcore 51 as core 3 on socket 0 00:03:50.624 EAL: Detected lcore 52 as core 4 on socket 0 00:03:50.624 EAL: Detected lcore 53 as core 5 on socket 0 00:03:50.624 EAL: Detected lcore 54 as core 6 on socket 0 00:03:50.624 EAL: Detected lcore 55 as core 8 on socket 0 00:03:50.624 EAL: Detected lcore 56 as core 9 on socket 0 00:03:50.624 EAL: Detected lcore 57 as core 10 on socket 0 00:03:50.624 EAL: Detected lcore 58 as core 11 on socket 0 00:03:50.624 EAL: Detected lcore 59 as core 12 on socket 0 00:03:50.624 EAL: Detected lcore 60 as core 13 on socket 0 00:03:50.624 EAL: Detected lcore 61 as core 16 on socket 0 00:03:50.624 EAL: Detected lcore 62 as core 17 on socket 0 00:03:50.624 EAL: Detected lcore 63 as core 18 on socket 0 00:03:50.624 EAL: Detected lcore 64 as core 19 on socket 0 00:03:50.624 EAL: Detected lcore 65 as core 20 on socket 0 00:03:50.624 EAL: Detected lcore 66 as core 21 on socket 0 00:03:50.624 EAL: Detected lcore 67 as core 25 on socket 0 00:03:50.624 EAL: Detected lcore 68 as core 26 on socket 0 00:03:50.624 EAL: Detected lcore 69 as core 27 on socket 0 00:03:50.624 EAL: Detected lcore 70 as core 28 on socket 0 00:03:50.624 
EAL: Detected lcore 71 as core 29 on socket 0 00:03:50.624 EAL: Detected lcore 72 as core 0 on socket 1 00:03:50.624 EAL: Detected lcore 73 as core 1 on socket 1 00:03:50.624 EAL: Detected lcore 74 as core 2 on socket 1 00:03:50.624 EAL: Detected lcore 75 as core 3 on socket 1 00:03:50.624 EAL: Detected lcore 76 as core 4 on socket 1 00:03:50.624 EAL: Detected lcore 77 as core 5 on socket 1 00:03:50.624 EAL: Detected lcore 78 as core 6 on socket 1 00:03:50.624 EAL: Detected lcore 79 as core 8 on socket 1 00:03:50.624 EAL: Detected lcore 80 as core 9 on socket 1 00:03:50.624 EAL: Detected lcore 81 as core 10 on socket 1 00:03:50.624 EAL: Detected lcore 82 as core 11 on socket 1 00:03:50.624 EAL: Detected lcore 83 as core 12 on socket 1 00:03:50.624 EAL: Detected lcore 84 as core 13 on socket 1 00:03:50.624 EAL: Detected lcore 85 as core 16 on socket 1 00:03:50.624 EAL: Detected lcore 86 as core 17 on socket 1 00:03:50.624 EAL: Detected lcore 87 as core 18 on socket 1 00:03:50.624 EAL: Detected lcore 88 as core 19 on socket 1 00:03:50.624 EAL: Detected lcore 89 as core 20 on socket 1 00:03:50.624 EAL: Detected lcore 90 as core 21 on socket 1 00:03:50.624 EAL: Detected lcore 91 as core 25 on socket 1 00:03:50.624 EAL: Detected lcore 92 as core 26 on socket 1 00:03:50.624 EAL: Detected lcore 93 as core 27 on socket 1 00:03:50.624 EAL: Detected lcore 94 as core 28 on socket 1 00:03:50.624 EAL: Detected lcore 95 as core 29 on socket 1 00:03:50.624 EAL: Maximum logical cores by configuration: 128 00:03:50.624 EAL: Detected CPU lcores: 96 00:03:50.624 EAL: Detected NUMA nodes: 2 00:03:50.624 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:50.624 EAL: Detected shared linkage of DPDK 00:03:50.624 EAL: No shared files mode enabled, IPC will be disabled 00:03:50.624 EAL: Bus pci wants IOVA as 'DC' 00:03:50.624 EAL: Buses did not request a specific IOVA mode. 00:03:50.624 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:50.624 EAL: Selected IOVA mode 'VA' 00:03:50.624 EAL: Probing VFIO support... 00:03:50.624 EAL: IOMMU type 1 (Type 1) is supported 00:03:50.624 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:50.624 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:50.624 EAL: VFIO support initialized 00:03:50.624 EAL: Ask a virtual area of 0x2e000 bytes 00:03:50.624 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:50.624 EAL: Setting up physically contiguous memory... 
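[editor's note] For reference, the topology EAL just probed is a 2-socket node with 96 logical cores: lcores 0-23 and 48-71 map to the cores of socket 0, lcores 24-47 and 72-95 to socket 1, so lcores 48-95 are the hyperthread siblings of 0-47 (hence the duplicated core IDs above). A quick way to cross-check that mapping outside of DPDK, assuming util-linux lscpu is available on the test node (illustrative command, not part of the original run):

    lscpu -e=CPU,CORE,SOCKET,NODE   # logical CPU -> physical core / socket / NUMA node table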
00:03:50.624 EAL: Setting maximum number of open files to 524288 00:03:50.624 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:50.624 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:50.624 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:50.624 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.624 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:50.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.624 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.624 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:50.624 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:50.624 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.624 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:50.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.624 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.624 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:50.624 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:50.624 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.624 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:50.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.624 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.624 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:50.624 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:50.624 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.624 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:50.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.624 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.624 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:50.624 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:50.625 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:50.625 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.625 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:50.625 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:50.625 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.625 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:50.625 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:50.625 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.625 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:50.625 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:50.625 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.625 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:50.625 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:50.625 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.625 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:50.625 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:50.625 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.625 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:50.625 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:50.625 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.625 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:50.625 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:50.625 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.625 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:50.625 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:50.625 EAL: Hugepages will be freed exactly as allocated. 00:03:50.625 EAL: No shared files mode enabled, IPC is disabled 00:03:50.625 EAL: No shared files mode enabled, IPC is disabled 00:03:50.625 EAL: TSC frequency is ~2100000 KHz 00:03:50.625 EAL: Main lcore 0 is ready (tid=7f95819b5a40;cpuset=[0]) 00:03:50.625 EAL: Trying to obtain current memory policy. 00:03:50.625 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.625 EAL: Restoring previous memory policy: 0 00:03:50.625 EAL: request: mp_malloc_sync 00:03:50.625 EAL: No shared files mode enabled, IPC is disabled 00:03:50.625 EAL: Heap on socket 0 was expanded by 2MB 00:03:50.625 EAL: No shared files mode enabled, IPC is disabled 00:03:50.884 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:50.884 EAL: Mem event callback 'spdk:(nil)' registered 00:03:50.884 00:03:50.884 00:03:50.884 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.884 http://cunit.sourceforge.net/ 00:03:50.884 00:03:50.884 00:03:50.884 Suite: components_suite 00:03:51.143 Test: vtophys_malloc_test ...passed 00:03:51.143 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:51.143 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.143 EAL: Restoring previous memory policy: 4 00:03:51.143 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.143 EAL: request: mp_malloc_sync 00:03:51.143 EAL: No shared files mode enabled, IPC is disabled 00:03:51.143 EAL: Heap on socket 0 was expanded by 4MB 00:03:51.143 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.143 EAL: request: mp_malloc_sync 00:03:51.143 EAL: No shared files mode enabled, IPC is disabled 00:03:51.143 EAL: Heap on socket 0 was shrunk by 4MB 00:03:51.143 EAL: Trying to obtain current memory policy. 00:03:51.143 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.143 EAL: Restoring previous memory policy: 4 00:03:51.143 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.143 EAL: request: mp_malloc_sync 00:03:51.143 EAL: No shared files mode enabled, IPC is disabled 00:03:51.143 EAL: Heap on socket 0 was expanded by 6MB 00:03:51.143 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.143 EAL: request: mp_malloc_sync 00:03:51.143 EAL: No shared files mode enabled, IPC is disabled 00:03:51.143 EAL: Heap on socket 0 was shrunk by 6MB 00:03:51.143 EAL: Trying to obtain current memory policy. 00:03:51.143 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.143 EAL: Restoring previous memory policy: 4 00:03:51.143 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.143 EAL: request: mp_malloc_sync 00:03:51.143 EAL: No shared files mode enabled, IPC is disabled 00:03:51.143 EAL: Heap on socket 0 was expanded by 10MB 00:03:51.143 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.143 EAL: request: mp_malloc_sync 00:03:51.143 EAL: No shared files mode enabled, IPC is disabled 00:03:51.143 EAL: Heap on socket 0 was shrunk by 10MB 00:03:51.143 EAL: Trying to obtain current memory policy. 
00:03:51.143 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.143 EAL: Restoring previous memory policy: 4 00:03:51.143 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.143 EAL: request: mp_malloc_sync 00:03:51.143 EAL: No shared files mode enabled, IPC is disabled 00:03:51.143 EAL: Heap on socket 0 was expanded by 18MB 00:03:51.143 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.143 EAL: request: mp_malloc_sync 00:03:51.143 EAL: No shared files mode enabled, IPC is disabled 00:03:51.143 EAL: Heap on socket 0 was shrunk by 18MB 00:03:51.143 EAL: Trying to obtain current memory policy. 00:03:51.143 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.143 EAL: Restoring previous memory policy: 4 00:03:51.143 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.143 EAL: request: mp_malloc_sync 00:03:51.143 EAL: No shared files mode enabled, IPC is disabled 00:03:51.143 EAL: Heap on socket 0 was expanded by 34MB 00:03:51.143 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.143 EAL: request: mp_malloc_sync 00:03:51.143 EAL: No shared files mode enabled, IPC is disabled 00:03:51.143 EAL: Heap on socket 0 was shrunk by 34MB 00:03:51.408 EAL: Trying to obtain current memory policy. 00:03:51.408 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.408 EAL: Restoring previous memory policy: 4 00:03:51.408 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.408 EAL: request: mp_malloc_sync 00:03:51.408 EAL: No shared files mode enabled, IPC is disabled 00:03:51.408 EAL: Heap on socket 0 was expanded by 66MB 00:03:51.408 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.408 EAL: request: mp_malloc_sync 00:03:51.408 EAL: No shared files mode enabled, IPC is disabled 00:03:51.408 EAL: Heap on socket 0 was shrunk by 66MB 00:03:51.668 EAL: Trying to obtain current memory policy. 00:03:51.668 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.668 EAL: Restoring previous memory policy: 4 00:03:51.668 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.668 EAL: request: mp_malloc_sync 00:03:51.668 EAL: No shared files mode enabled, IPC is disabled 00:03:51.668 EAL: Heap on socket 0 was expanded by 130MB 00:03:51.668 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.927 EAL: request: mp_malloc_sync 00:03:51.927 EAL: No shared files mode enabled, IPC is disabled 00:03:51.927 EAL: Heap on socket 0 was shrunk by 130MB 00:03:51.927 EAL: Trying to obtain current memory policy. 00:03:51.927 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.186 EAL: Restoring previous memory policy: 4 00:03:52.186 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.186 EAL: request: mp_malloc_sync 00:03:52.186 EAL: No shared files mode enabled, IPC is disabled 00:03:52.186 EAL: Heap on socket 0 was expanded by 258MB 00:03:52.445 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.445 EAL: request: mp_malloc_sync 00:03:52.445 EAL: No shared files mode enabled, IPC is disabled 00:03:52.445 EAL: Heap on socket 0 was shrunk by 258MB 00:03:53.013 EAL: Trying to obtain current memory policy. 
00:03:53.013 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.013 EAL: Restoring previous memory policy: 4 00:03:53.013 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.013 EAL: request: mp_malloc_sync 00:03:53.013 EAL: No shared files mode enabled, IPC is disabled 00:03:53.013 EAL: Heap on socket 0 was expanded by 514MB 00:03:53.950 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.950 EAL: request: mp_malloc_sync 00:03:53.950 EAL: No shared files mode enabled, IPC is disabled 00:03:53.950 EAL: Heap on socket 0 was shrunk by 514MB 00:03:54.887 EAL: Trying to obtain current memory policy. 00:03:54.887 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.146 EAL: Restoring previous memory policy: 4 00:03:55.146 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.146 EAL: request: mp_malloc_sync 00:03:55.146 EAL: No shared files mode enabled, IPC is disabled 00:03:55.146 EAL: Heap on socket 0 was expanded by 1026MB 00:03:57.051 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.051 EAL: request: mp_malloc_sync 00:03:57.051 EAL: No shared files mode enabled, IPC is disabled 00:03:57.051 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:58.954 passed 00:03:58.954 00:03:58.954 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.954 suites 1 1 n/a 0 0 00:03:58.954 tests 2 2 2 0 0 00:03:58.954 asserts 497 497 497 0 n/a 00:03:58.954 00:03:58.954 Elapsed time = 7.765 seconds 00:03:58.954 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.954 EAL: request: mp_malloc_sync 00:03:58.954 EAL: No shared files mode enabled, IPC is disabled 00:03:58.954 EAL: Heap on socket 0 was shrunk by 2MB 00:03:58.954 EAL: No shared files mode enabled, IPC is disabled 00:03:58.954 EAL: No shared files mode enabled, IPC is disabled 00:03:58.954 EAL: No shared files mode enabled, IPC is disabled 00:03:58.954 00:03:58.954 real 0m8.000s 00:03:58.954 user 0m7.214s 00:03:58.954 sys 0m0.737s 00:03:58.954 10:05:52 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.954 10:05:52 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:58.954 ************************************ 00:03:58.954 END TEST env_vtophys 00:03:58.954 ************************************ 00:03:58.954 10:05:52 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:58.954 10:05:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.954 10:05:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.954 10:05:52 env -- common/autotest_common.sh@10 -- # set +x 00:03:58.954 ************************************ 00:03:58.954 START TEST env_pci 00:03:58.954 ************************************ 00:03:58.954 10:05:52 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:58.954 00:03:58.954 00:03:58.954 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.954 http://cunit.sourceforge.net/ 00:03:58.954 00:03:58.954 00:03:58.954 Suite: pci 00:03:58.954 Test: pci_hook ...[2024-12-13 10:05:52.495602] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3696026 has claimed it 00:03:58.954 EAL: Cannot find device (10000:00:01.0) 00:03:58.954 EAL: Failed to attach device on primary process 00:03:58.954 passed 00:03:58.954 00:03:58.955 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:58.955 suites 1 1 n/a 0 0 00:03:58.955 tests 1 1 1 0 0 00:03:58.955 asserts 25 25 25 0 n/a 00:03:58.955 00:03:58.955 Elapsed time = 0.045 seconds 00:03:58.955 00:03:58.955 real 0m0.120s 00:03:58.955 user 0m0.045s 00:03:58.955 sys 0m0.075s 00:03:58.955 10:05:52 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.955 10:05:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:58.955 ************************************ 00:03:58.955 END TEST env_pci 00:03:58.955 ************************************ 00:03:58.955 10:05:52 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:58.955 10:05:52 env -- env/env.sh@15 -- # uname 00:03:58.955 10:05:52 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:58.955 10:05:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:58.955 10:05:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:58.955 10:05:52 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:58.955 10:05:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.955 10:05:52 env -- common/autotest_common.sh@10 -- # set +x 00:03:58.955 ************************************ 00:03:58.955 START TEST env_dpdk_post_init 00:03:58.955 ************************************ 00:03:58.955 10:05:52 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:58.955 EAL: Detected CPU lcores: 96 00:03:58.955 EAL: Detected NUMA nodes: 2 00:03:58.955 EAL: Detected shared linkage of DPDK 00:03:58.955 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:58.955 EAL: Selected IOVA mode 'VA' 00:03:58.955 EAL: VFIO support initialized 00:03:58.955 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:59.214 EAL: Using IOMMU type 1 (Type 1) 00:03:59.214 EAL: Ignore mapping IO port bar(1) 00:03:59.214 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:59.214 EAL: Ignore mapping IO port bar(1) 00:03:59.214 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:59.214 EAL: Ignore mapping IO port bar(1) 00:03:59.214 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:59.214 EAL: Ignore mapping IO port bar(1) 00:03:59.214 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:59.214 EAL: Ignore mapping IO port bar(1) 00:03:59.214 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:59.214 EAL: Ignore mapping IO port bar(1) 00:03:59.214 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:59.214 EAL: Ignore mapping IO port bar(1) 00:03:59.214 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:59.214 EAL: Ignore mapping IO port bar(1) 00:03:59.214 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:00.151 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:00.151 EAL: Ignore mapping IO port bar(1) 00:04:00.151 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:00.151 EAL: Ignore mapping IO port bar(1) 00:04:00.151 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:00.151 EAL: Ignore mapping IO port bar(1) 00:04:00.151 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:00.151 EAL: Ignore mapping IO port bar(1) 00:04:00.151 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:00.151 EAL: Ignore mapping IO port bar(1) 00:04:00.151 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:00.151 EAL: Ignore mapping IO port bar(1) 00:04:00.151 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:00.151 EAL: Ignore mapping IO port bar(1) 00:04:00.151 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:00.151 EAL: Ignore mapping IO port bar(1) 00:04:00.151 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:03.438 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:03.438 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:03.438 Starting DPDK initialization... 00:04:03.438 Starting SPDK post initialization... 00:04:03.438 SPDK NVMe probe 00:04:03.438 Attaching to 0000:5e:00.0 00:04:03.438 Attached to 0000:5e:00.0 00:04:03.438 Cleaning up... 00:04:03.438 00:04:03.438 real 0m4.440s 00:04:03.438 user 0m3.020s 00:04:03.438 sys 0m0.490s 00:04:03.438 10:05:57 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.438 10:05:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:03.438 ************************************ 00:04:03.438 END TEST env_dpdk_post_init 00:04:03.438 ************************************ 00:04:03.438 10:05:57 env -- env/env.sh@26 -- # uname 00:04:03.438 10:05:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:03.438 10:05:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:03.438 10:05:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.438 10:05:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.438 10:05:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:03.438 ************************************ 00:04:03.438 START TEST env_mem_callbacks 00:04:03.438 ************************************ 00:04:03.438 10:05:57 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:03.438 EAL: Detected CPU lcores: 96 00:04:03.438 EAL: Detected NUMA nodes: 2 00:04:03.438 EAL: Detected shared linkage of DPDK 00:04:03.438 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:03.438 EAL: Selected IOVA mode 'VA' 00:04:03.438 EAL: VFIO support initialized 00:04:03.438 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:03.438 00:04:03.438 00:04:03.438 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.438 http://cunit.sourceforge.net/ 00:04:03.438 00:04:03.438 00:04:03.438 Suite: memory 00:04:03.438 Test: test ... 
00:04:03.438 register 0x200000200000 2097152 00:04:03.438 malloc 3145728 00:04:03.438 register 0x200000400000 4194304 00:04:03.438 buf 0x2000004fffc0 len 3145728 PASSED 00:04:03.438 malloc 64 00:04:03.438 buf 0x2000004ffec0 len 64 PASSED 00:04:03.438 malloc 4194304 00:04:03.438 register 0x200000800000 6291456 00:04:03.438 buf 0x2000009fffc0 len 4194304 PASSED 00:04:03.438 free 0x2000004fffc0 3145728 00:04:03.438 free 0x2000004ffec0 64 00:04:03.438 unregister 0x200000400000 4194304 PASSED 00:04:03.438 free 0x2000009fffc0 4194304 00:04:03.438 unregister 0x200000800000 6291456 PASSED 00:04:03.438 malloc 8388608 00:04:03.438 register 0x200000400000 10485760 00:04:03.438 buf 0x2000005fffc0 len 8388608 PASSED 00:04:03.438 free 0x2000005fffc0 8388608 00:04:03.438 unregister 0x200000400000 10485760 PASSED 00:04:03.697 passed 00:04:03.697 00:04:03.697 Run Summary: Type Total Ran Passed Failed Inactive 00:04:03.697 suites 1 1 n/a 0 0 00:04:03.697 tests 1 1 1 0 0 00:04:03.697 asserts 15 15 15 0 n/a 00:04:03.697 00:04:03.697 Elapsed time = 0.067 seconds 00:04:03.697 00:04:03.697 real 0m0.169s 00:04:03.697 user 0m0.095s 00:04:03.697 sys 0m0.075s 00:04:03.697 10:05:57 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.697 10:05:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:03.697 ************************************ 00:04:03.697 END TEST env_mem_callbacks 00:04:03.697 ************************************ 00:04:03.697 00:04:03.697 real 0m13.524s 00:04:03.697 user 0m10.842s 00:04:03.697 sys 0m1.737s 00:04:03.697 10:05:57 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.697 10:05:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:03.697 ************************************ 00:04:03.697 END TEST env 00:04:03.697 ************************************ 00:04:03.697 10:05:57 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:03.697 10:05:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.697 10:05:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.697 10:05:57 -- common/autotest_common.sh@10 -- # set +x 00:04:03.697 ************************************ 00:04:03.697 START TEST rpc 00:04:03.697 ************************************ 00:04:03.697 10:05:57 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:03.697 * Looking for test storage... 
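For reference, the env suite traced above just runs each standalone test binary with DPDK-style EAL arguments; a minimal manual reproduction, assuming the same workspace root and root privileges for hugepage/VFIO access, could look like:

# assumed workspace root, matching the paths logged above
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# same arguments run_test passed to env_dpdk_post_init in the trace
$SPDK/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
# mem_callbacks takes no extra arguments in the trace above
$SPDK/test/env/mem_callbacks/mem_callbacks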
00:04:03.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:03.697 10:05:57 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:03.697 10:05:57 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:03.697 10:05:57 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:03.957 10:05:57 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:03.957 10:05:57 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.957 10:05:57 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.957 10:05:57 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.957 10:05:57 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.957 10:05:57 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.957 10:05:57 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.957 10:05:57 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.957 10:05:57 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.957 10:05:57 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.957 10:05:57 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.957 10:05:57 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.957 10:05:57 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:03.957 10:05:57 rpc -- scripts/common.sh@345 -- # : 1 00:04:03.957 10:05:57 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.957 10:05:57 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:03.957 10:05:57 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:03.957 10:05:57 rpc -- scripts/common.sh@353 -- # local d=1 00:04:03.957 10:05:57 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.957 10:05:57 rpc -- scripts/common.sh@355 -- # echo 1 00:04:03.957 10:05:57 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.957 10:05:57 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:03.957 10:05:57 rpc -- scripts/common.sh@353 -- # local d=2 00:04:03.957 10:05:57 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.957 10:05:57 rpc -- scripts/common.sh@355 -- # echo 2 00:04:03.957 10:05:57 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.957 10:05:57 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.957 10:05:57 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.957 10:05:57 rpc -- scripts/common.sh@368 -- # return 0 00:04:03.957 10:05:57 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.957 10:05:57 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:03.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.957 --rc genhtml_branch_coverage=1 00:04:03.957 --rc genhtml_function_coverage=1 00:04:03.957 --rc genhtml_legend=1 00:04:03.957 --rc geninfo_all_blocks=1 00:04:03.957 --rc geninfo_unexecuted_blocks=1 00:04:03.957 00:04:03.957 ' 00:04:03.957 10:05:57 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:03.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.957 --rc genhtml_branch_coverage=1 00:04:03.957 --rc genhtml_function_coverage=1 00:04:03.957 --rc genhtml_legend=1 00:04:03.957 --rc geninfo_all_blocks=1 00:04:03.957 --rc geninfo_unexecuted_blocks=1 00:04:03.957 00:04:03.957 ' 00:04:03.957 10:05:57 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:03.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.957 --rc genhtml_branch_coverage=1 00:04:03.957 --rc genhtml_function_coverage=1 
00:04:03.957 --rc genhtml_legend=1 00:04:03.957 --rc geninfo_all_blocks=1 00:04:03.957 --rc geninfo_unexecuted_blocks=1 00:04:03.957 00:04:03.957 ' 00:04:03.957 10:05:57 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:03.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.957 --rc genhtml_branch_coverage=1 00:04:03.957 --rc genhtml_function_coverage=1 00:04:03.957 --rc genhtml_legend=1 00:04:03.957 --rc geninfo_all_blocks=1 00:04:03.957 --rc geninfo_unexecuted_blocks=1 00:04:03.957 00:04:03.957 ' 00:04:03.957 10:05:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3697049 00:04:03.957 10:05:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:03.957 10:05:57 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:03.957 10:05:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3697049 00:04:03.957 10:05:57 rpc -- common/autotest_common.sh@835 -- # '[' -z 3697049 ']' 00:04:03.957 10:05:57 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.957 10:05:57 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:03.957 10:05:57 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.957 10:05:57 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:03.957 10:05:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.957 [2024-12-13 10:05:57.706374] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:03.957 [2024-12-13 10:05:57.706471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3697049 ] 00:04:03.957 [2024-12-13 10:05:57.818424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.216 [2024-12-13 10:05:57.920127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:04.216 [2024-12-13 10:05:57.920169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3697049' to capture a snapshot of events at runtime. 00:04:04.216 [2024-12-13 10:05:57.920180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:04.216 [2024-12-13 10:05:57.920188] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:04.216 [2024-12-13 10:05:57.920197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3697049 for offline analysis/debug. 
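The NOTICE lines above describe how to inspect the bdev tracepoints enabled via '-e bdev'; a hedged sketch of that capture, reusing the pid and shm name printed by the target (spdk_trace assumed to be built under build/bin in this tree), would be:

# snapshot the live tracepoint buffers of the running target (pid taken from the log above)
build/bin/spdk_trace -s spdk_tgt -p 3697049
# or keep the raw shm file for offline analysis, as the target suggests
cp /dev/shm/spdk_tgt_trace.pid3697049 /tmp/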
00:04:04.216 [2024-12-13 10:05:57.921543] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.152 10:05:58 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:05.152 10:05:58 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:05.152 10:05:58 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:05.152 10:05:58 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:05.152 10:05:58 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:05.152 10:05:58 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:05.152 10:05:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.152 10:05:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.152 10:05:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.152 ************************************ 00:04:05.152 START TEST rpc_integrity 00:04:05.152 ************************************ 00:04:05.152 10:05:58 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:05.152 10:05:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:05.152 10:05:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.152 10:05:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.152 10:05:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.152 10:05:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:05.152 10:05:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:05.152 10:05:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:05.152 10:05:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:05.152 10:05:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.152 10:05:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.152 10:05:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.152 10:05:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:05.152 10:05:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:05.152 10:05:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.152 10:05:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.152 10:05:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.152 10:05:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:05.152 { 00:04:05.152 "name": "Malloc0", 00:04:05.152 "aliases": [ 00:04:05.152 "c9c432d1-edd0-44ca-a685-3cd278a63839" 00:04:05.152 ], 00:04:05.152 "product_name": "Malloc disk", 00:04:05.152 "block_size": 512, 00:04:05.152 "num_blocks": 16384, 00:04:05.152 "uuid": "c9c432d1-edd0-44ca-a685-3cd278a63839", 00:04:05.152 "assigned_rate_limits": { 00:04:05.152 "rw_ios_per_sec": 0, 00:04:05.152 "rw_mbytes_per_sec": 0, 00:04:05.152 "r_mbytes_per_sec": 0, 00:04:05.152 "w_mbytes_per_sec": 0 00:04:05.152 }, 
00:04:05.152 "claimed": false, 00:04:05.152 "zoned": false, 00:04:05.152 "supported_io_types": { 00:04:05.152 "read": true, 00:04:05.152 "write": true, 00:04:05.152 "unmap": true, 00:04:05.152 "flush": true, 00:04:05.152 "reset": true, 00:04:05.152 "nvme_admin": false, 00:04:05.152 "nvme_io": false, 00:04:05.152 "nvme_io_md": false, 00:04:05.152 "write_zeroes": true, 00:04:05.152 "zcopy": true, 00:04:05.152 "get_zone_info": false, 00:04:05.152 "zone_management": false, 00:04:05.152 "zone_append": false, 00:04:05.152 "compare": false, 00:04:05.152 "compare_and_write": false, 00:04:05.152 "abort": true, 00:04:05.152 "seek_hole": false, 00:04:05.152 "seek_data": false, 00:04:05.152 "copy": true, 00:04:05.152 "nvme_iov_md": false 00:04:05.152 }, 00:04:05.152 "memory_domains": [ 00:04:05.152 { 00:04:05.152 "dma_device_id": "system", 00:04:05.152 "dma_device_type": 1 00:04:05.152 }, 00:04:05.152 { 00:04:05.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.152 "dma_device_type": 2 00:04:05.152 } 00:04:05.152 ], 00:04:05.152 "driver_specific": {} 00:04:05.152 } 00:04:05.152 ]' 00:04:05.152 10:05:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:05.152 10:05:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:05.152 10:05:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:05.152 10:05:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.152 10:05:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.152 [2024-12-13 10:05:58.936045] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:05.152 [2024-12-13 10:05:58.936090] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:05.152 [2024-12-13 10:05:58.936113] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000021c80 00:04:05.152 [2024-12-13 10:05:58.936123] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:05.152 [2024-12-13 10:05:58.938065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:05.152 [2024-12-13 10:05:58.938092] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:05.152 Passthru0 00:04:05.152 10:05:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.152 10:05:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:05.152 10:05:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.152 10:05:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.152 10:05:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.152 10:05:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:05.152 { 00:04:05.152 "name": "Malloc0", 00:04:05.152 "aliases": [ 00:04:05.152 "c9c432d1-edd0-44ca-a685-3cd278a63839" 00:04:05.152 ], 00:04:05.152 "product_name": "Malloc disk", 00:04:05.152 "block_size": 512, 00:04:05.152 "num_blocks": 16384, 00:04:05.152 "uuid": "c9c432d1-edd0-44ca-a685-3cd278a63839", 00:04:05.152 "assigned_rate_limits": { 00:04:05.152 "rw_ios_per_sec": 0, 00:04:05.152 "rw_mbytes_per_sec": 0, 00:04:05.152 "r_mbytes_per_sec": 0, 00:04:05.152 "w_mbytes_per_sec": 0 00:04:05.152 }, 00:04:05.152 "claimed": true, 00:04:05.152 "claim_type": "exclusive_write", 00:04:05.152 "zoned": false, 00:04:05.152 "supported_io_types": { 00:04:05.152 "read": true, 00:04:05.152 "write": true, 00:04:05.152 "unmap": true, 00:04:05.152 
"flush": true, 00:04:05.152 "reset": true, 00:04:05.152 "nvme_admin": false, 00:04:05.152 "nvme_io": false, 00:04:05.152 "nvme_io_md": false, 00:04:05.152 "write_zeroes": true, 00:04:05.152 "zcopy": true, 00:04:05.152 "get_zone_info": false, 00:04:05.152 "zone_management": false, 00:04:05.152 "zone_append": false, 00:04:05.152 "compare": false, 00:04:05.152 "compare_and_write": false, 00:04:05.152 "abort": true, 00:04:05.152 "seek_hole": false, 00:04:05.152 "seek_data": false, 00:04:05.152 "copy": true, 00:04:05.152 "nvme_iov_md": false 00:04:05.152 }, 00:04:05.152 "memory_domains": [ 00:04:05.153 { 00:04:05.153 "dma_device_id": "system", 00:04:05.153 "dma_device_type": 1 00:04:05.153 }, 00:04:05.153 { 00:04:05.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.153 "dma_device_type": 2 00:04:05.153 } 00:04:05.153 ], 00:04:05.153 "driver_specific": {} 00:04:05.153 }, 00:04:05.153 { 00:04:05.153 "name": "Passthru0", 00:04:05.153 "aliases": [ 00:04:05.153 "a8e493b7-3c99-5f7a-b466-a6cfa73b4939" 00:04:05.153 ], 00:04:05.153 "product_name": "passthru", 00:04:05.153 "block_size": 512, 00:04:05.153 "num_blocks": 16384, 00:04:05.153 "uuid": "a8e493b7-3c99-5f7a-b466-a6cfa73b4939", 00:04:05.153 "assigned_rate_limits": { 00:04:05.153 "rw_ios_per_sec": 0, 00:04:05.153 "rw_mbytes_per_sec": 0, 00:04:05.153 "r_mbytes_per_sec": 0, 00:04:05.153 "w_mbytes_per_sec": 0 00:04:05.153 }, 00:04:05.153 "claimed": false, 00:04:05.153 "zoned": false, 00:04:05.153 "supported_io_types": { 00:04:05.153 "read": true, 00:04:05.153 "write": true, 00:04:05.153 "unmap": true, 00:04:05.153 "flush": true, 00:04:05.153 "reset": true, 00:04:05.153 "nvme_admin": false, 00:04:05.153 "nvme_io": false, 00:04:05.153 "nvme_io_md": false, 00:04:05.153 "write_zeroes": true, 00:04:05.153 "zcopy": true, 00:04:05.153 "get_zone_info": false, 00:04:05.153 "zone_management": false, 00:04:05.153 "zone_append": false, 00:04:05.153 "compare": false, 00:04:05.153 "compare_and_write": false, 00:04:05.153 "abort": true, 00:04:05.153 "seek_hole": false, 00:04:05.153 "seek_data": false, 00:04:05.153 "copy": true, 00:04:05.153 "nvme_iov_md": false 00:04:05.153 }, 00:04:05.153 "memory_domains": [ 00:04:05.153 { 00:04:05.153 "dma_device_id": "system", 00:04:05.153 "dma_device_type": 1 00:04:05.153 }, 00:04:05.153 { 00:04:05.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.153 "dma_device_type": 2 00:04:05.153 } 00:04:05.153 ], 00:04:05.153 "driver_specific": { 00:04:05.153 "passthru": { 00:04:05.153 "name": "Passthru0", 00:04:05.153 "base_bdev_name": "Malloc0" 00:04:05.153 } 00:04:05.153 } 00:04:05.153 } 00:04:05.153 ]' 00:04:05.153 10:05:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:05.153 10:05:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:05.153 10:05:59 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:05.153 10:05:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.153 10:05:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.153 10:05:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.153 10:05:59 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:05.153 10:05:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.153 10:05:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.411 10:05:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.411 10:05:59 rpc.rpc_integrity -- rpc/rpc.sh@25 
-- # rpc_cmd bdev_get_bdevs 00:04:05.411 10:05:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.411 10:05:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.411 10:05:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.411 10:05:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:05.411 10:05:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:05.411 10:05:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:05.411 00:04:05.411 real 0m0.313s 00:04:05.411 user 0m0.178s 00:04:05.411 sys 0m0.041s 00:04:05.411 10:05:59 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.411 10:05:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.411 ************************************ 00:04:05.411 END TEST rpc_integrity 00:04:05.411 ************************************ 00:04:05.411 10:05:59 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:05.411 10:05:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.411 10:05:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.411 10:05:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.411 ************************************ 00:04:05.411 START TEST rpc_plugins 00:04:05.411 ************************************ 00:04:05.411 10:05:59 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:05.411 10:05:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:05.411 10:05:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.411 10:05:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:05.411 10:05:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.411 10:05:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:05.411 10:05:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:05.411 10:05:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.411 10:05:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:05.411 10:05:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.411 10:05:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:05.411 { 00:04:05.411 "name": "Malloc1", 00:04:05.411 "aliases": [ 00:04:05.411 "c7e1224e-fec1-4cc6-85cb-d70f2f1aa4a1" 00:04:05.411 ], 00:04:05.411 "product_name": "Malloc disk", 00:04:05.411 "block_size": 4096, 00:04:05.411 "num_blocks": 256, 00:04:05.411 "uuid": "c7e1224e-fec1-4cc6-85cb-d70f2f1aa4a1", 00:04:05.411 "assigned_rate_limits": { 00:04:05.411 "rw_ios_per_sec": 0, 00:04:05.411 "rw_mbytes_per_sec": 0, 00:04:05.411 "r_mbytes_per_sec": 0, 00:04:05.411 "w_mbytes_per_sec": 0 00:04:05.411 }, 00:04:05.411 "claimed": false, 00:04:05.411 "zoned": false, 00:04:05.411 "supported_io_types": { 00:04:05.411 "read": true, 00:04:05.411 "write": true, 00:04:05.411 "unmap": true, 00:04:05.411 "flush": true, 00:04:05.411 "reset": true, 00:04:05.411 "nvme_admin": false, 00:04:05.411 "nvme_io": false, 00:04:05.411 "nvme_io_md": false, 00:04:05.411 "write_zeroes": true, 00:04:05.411 "zcopy": true, 00:04:05.411 "get_zone_info": false, 00:04:05.411 "zone_management": false, 00:04:05.411 "zone_append": false, 00:04:05.411 "compare": false, 00:04:05.411 "compare_and_write": false, 00:04:05.411 "abort": true, 00:04:05.411 "seek_hole": false, 00:04:05.411 "seek_data": false, 00:04:05.411 "copy": true, 00:04:05.411 "nvme_iov_md": 
false 00:04:05.411 }, 00:04:05.411 "memory_domains": [ 00:04:05.411 { 00:04:05.411 "dma_device_id": "system", 00:04:05.411 "dma_device_type": 1 00:04:05.411 }, 00:04:05.411 { 00:04:05.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.411 "dma_device_type": 2 00:04:05.411 } 00:04:05.411 ], 00:04:05.411 "driver_specific": {} 00:04:05.411 } 00:04:05.411 ]' 00:04:05.411 10:05:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:05.411 10:05:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:05.411 10:05:59 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:05.411 10:05:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.411 10:05:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:05.411 10:05:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.411 10:05:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:05.411 10:05:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.411 10:05:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:05.411 10:05:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.412 10:05:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:05.412 10:05:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:05.670 10:05:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:05.670 00:04:05.670 real 0m0.139s 00:04:05.670 user 0m0.083s 00:04:05.670 sys 0m0.016s 00:04:05.670 10:05:59 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.670 10:05:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:05.670 ************************************ 00:04:05.670 END TEST rpc_plugins 00:04:05.670 ************************************ 00:04:05.670 10:05:59 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:05.670 10:05:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.670 10:05:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.670 10:05:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.670 ************************************ 00:04:05.670 START TEST rpc_trace_cmd_test 00:04:05.670 ************************************ 00:04:05.671 10:05:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:05.671 10:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:05.671 10:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:05.671 10:05:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.671 10:05:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:05.671 10:05:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.671 10:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:05.671 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3697049", 00:04:05.671 "tpoint_group_mask": "0x8", 00:04:05.671 "iscsi_conn": { 00:04:05.671 "mask": "0x2", 00:04:05.671 "tpoint_mask": "0x0" 00:04:05.671 }, 00:04:05.671 "scsi": { 00:04:05.671 "mask": "0x4", 00:04:05.671 "tpoint_mask": "0x0" 00:04:05.671 }, 00:04:05.671 "bdev": { 00:04:05.671 "mask": "0x8", 00:04:05.671 "tpoint_mask": "0xffffffffffffffff" 00:04:05.671 }, 00:04:05.671 "nvmf_rdma": { 00:04:05.671 "mask": "0x10", 00:04:05.671 "tpoint_mask": "0x0" 00:04:05.671 }, 00:04:05.671 "nvmf_tcp": { 00:04:05.671 "mask": "0x20", 00:04:05.671 
"tpoint_mask": "0x0" 00:04:05.671 }, 00:04:05.671 "ftl": { 00:04:05.671 "mask": "0x40", 00:04:05.671 "tpoint_mask": "0x0" 00:04:05.671 }, 00:04:05.671 "blobfs": { 00:04:05.671 "mask": "0x80", 00:04:05.671 "tpoint_mask": "0x0" 00:04:05.671 }, 00:04:05.671 "dsa": { 00:04:05.671 "mask": "0x200", 00:04:05.671 "tpoint_mask": "0x0" 00:04:05.671 }, 00:04:05.671 "thread": { 00:04:05.671 "mask": "0x400", 00:04:05.671 "tpoint_mask": "0x0" 00:04:05.671 }, 00:04:05.671 "nvme_pcie": { 00:04:05.671 "mask": "0x800", 00:04:05.671 "tpoint_mask": "0x0" 00:04:05.671 }, 00:04:05.671 "iaa": { 00:04:05.671 "mask": "0x1000", 00:04:05.671 "tpoint_mask": "0x0" 00:04:05.671 }, 00:04:05.671 "nvme_tcp": { 00:04:05.671 "mask": "0x2000", 00:04:05.671 "tpoint_mask": "0x0" 00:04:05.671 }, 00:04:05.671 "bdev_nvme": { 00:04:05.671 "mask": "0x4000", 00:04:05.671 "tpoint_mask": "0x0" 00:04:05.671 }, 00:04:05.671 "sock": { 00:04:05.671 "mask": "0x8000", 00:04:05.671 "tpoint_mask": "0x0" 00:04:05.671 }, 00:04:05.671 "blob": { 00:04:05.671 "mask": "0x10000", 00:04:05.671 "tpoint_mask": "0x0" 00:04:05.671 }, 00:04:05.671 "bdev_raid": { 00:04:05.671 "mask": "0x20000", 00:04:05.671 "tpoint_mask": "0x0" 00:04:05.671 }, 00:04:05.671 "scheduler": { 00:04:05.671 "mask": "0x40000", 00:04:05.671 "tpoint_mask": "0x0" 00:04:05.671 } 00:04:05.671 }' 00:04:05.671 10:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:05.671 10:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:05.671 10:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:05.671 10:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:05.671 10:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:05.671 10:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:05.671 10:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:05.671 10:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:05.671 10:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:05.671 10:05:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:05.671 00:04:05.671 real 0m0.176s 00:04:05.671 user 0m0.152s 00:04:05.671 sys 0m0.018s 00:04:05.671 10:05:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.671 10:05:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:05.671 ************************************ 00:04:05.671 END TEST rpc_trace_cmd_test 00:04:05.671 ************************************ 00:04:05.930 10:05:59 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:05.930 10:05:59 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:05.930 10:05:59 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:05.930 10:05:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.930 10:05:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.930 10:05:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.930 ************************************ 00:04:05.930 START TEST rpc_daemon_integrity 00:04:05.930 ************************************ 00:04:05.930 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:05.930 10:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:05.930 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.930 10:05:59 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.930 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.930 10:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:05.930 10:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:05.930 10:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:05.930 10:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:05.930 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.930 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.930 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.930 10:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:05.930 10:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:05.930 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.930 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.930 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.930 10:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:05.930 { 00:04:05.930 "name": "Malloc2", 00:04:05.930 "aliases": [ 00:04:05.930 "3e76484e-e68b-4cf6-9f26-8f339a11900b" 00:04:05.930 ], 00:04:05.930 "product_name": "Malloc disk", 00:04:05.930 "block_size": 512, 00:04:05.930 "num_blocks": 16384, 00:04:05.930 "uuid": "3e76484e-e68b-4cf6-9f26-8f339a11900b", 00:04:05.930 "assigned_rate_limits": { 00:04:05.930 "rw_ios_per_sec": 0, 00:04:05.930 "rw_mbytes_per_sec": 0, 00:04:05.930 "r_mbytes_per_sec": 0, 00:04:05.930 "w_mbytes_per_sec": 0 00:04:05.930 }, 00:04:05.930 "claimed": false, 00:04:05.930 "zoned": false, 00:04:05.930 "supported_io_types": { 00:04:05.930 "read": true, 00:04:05.930 "write": true, 00:04:05.930 "unmap": true, 00:04:05.930 "flush": true, 00:04:05.930 "reset": true, 00:04:05.930 "nvme_admin": false, 00:04:05.930 "nvme_io": false, 00:04:05.930 "nvme_io_md": false, 00:04:05.930 "write_zeroes": true, 00:04:05.930 "zcopy": true, 00:04:05.930 "get_zone_info": false, 00:04:05.930 "zone_management": false, 00:04:05.931 "zone_append": false, 00:04:05.931 "compare": false, 00:04:05.931 "compare_and_write": false, 00:04:05.931 "abort": true, 00:04:05.931 "seek_hole": false, 00:04:05.931 "seek_data": false, 00:04:05.931 "copy": true, 00:04:05.931 "nvme_iov_md": false 00:04:05.931 }, 00:04:05.931 "memory_domains": [ 00:04:05.931 { 00:04:05.931 "dma_device_id": "system", 00:04:05.931 "dma_device_type": 1 00:04:05.931 }, 00:04:05.931 { 00:04:05.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.931 "dma_device_type": 2 00:04:05.931 } 00:04:05.931 ], 00:04:05.931 "driver_specific": {} 00:04:05.931 } 00:04:05.931 ]' 00:04:05.931 10:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:05.931 10:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:05.931 10:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:05.931 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.931 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.931 [2024-12-13 10:05:59.769439] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:05.931 
[2024-12-13 10:05:59.769484] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:05.931 [2024-12-13 10:05:59.769503] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022e80 00:04:05.931 [2024-12-13 10:05:59.769513] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:05.931 [2024-12-13 10:05:59.771362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:05.931 [2024-12-13 10:05:59.771386] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:05.931 Passthru0 00:04:05.931 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.931 10:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:05.931 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.931 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.931 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.931 10:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:05.931 { 00:04:05.931 "name": "Malloc2", 00:04:05.931 "aliases": [ 00:04:05.931 "3e76484e-e68b-4cf6-9f26-8f339a11900b" 00:04:05.931 ], 00:04:05.931 "product_name": "Malloc disk", 00:04:05.931 "block_size": 512, 00:04:05.931 "num_blocks": 16384, 00:04:05.931 "uuid": "3e76484e-e68b-4cf6-9f26-8f339a11900b", 00:04:05.931 "assigned_rate_limits": { 00:04:05.931 "rw_ios_per_sec": 0, 00:04:05.931 "rw_mbytes_per_sec": 0, 00:04:05.931 "r_mbytes_per_sec": 0, 00:04:05.931 "w_mbytes_per_sec": 0 00:04:05.931 }, 00:04:05.931 "claimed": true, 00:04:05.931 "claim_type": "exclusive_write", 00:04:05.931 "zoned": false, 00:04:05.931 "supported_io_types": { 00:04:05.931 "read": true, 00:04:05.931 "write": true, 00:04:05.931 "unmap": true, 00:04:05.931 "flush": true, 00:04:05.931 "reset": true, 00:04:05.931 "nvme_admin": false, 00:04:05.931 "nvme_io": false, 00:04:05.931 "nvme_io_md": false, 00:04:05.931 "write_zeroes": true, 00:04:05.931 "zcopy": true, 00:04:05.931 "get_zone_info": false, 00:04:05.931 "zone_management": false, 00:04:05.931 "zone_append": false, 00:04:05.931 "compare": false, 00:04:05.931 "compare_and_write": false, 00:04:05.931 "abort": true, 00:04:05.931 "seek_hole": false, 00:04:05.931 "seek_data": false, 00:04:05.931 "copy": true, 00:04:05.931 "nvme_iov_md": false 00:04:05.931 }, 00:04:05.931 "memory_domains": [ 00:04:05.931 { 00:04:05.931 "dma_device_id": "system", 00:04:05.931 "dma_device_type": 1 00:04:05.931 }, 00:04:05.931 { 00:04:05.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.931 "dma_device_type": 2 00:04:05.931 } 00:04:05.931 ], 00:04:05.931 "driver_specific": {} 00:04:05.931 }, 00:04:05.931 { 00:04:05.931 "name": "Passthru0", 00:04:05.931 "aliases": [ 00:04:05.931 "5ab27466-e1a6-55e2-8a89-a2a92db642a7" 00:04:05.931 ], 00:04:05.931 "product_name": "passthru", 00:04:05.931 "block_size": 512, 00:04:05.931 "num_blocks": 16384, 00:04:05.931 "uuid": "5ab27466-e1a6-55e2-8a89-a2a92db642a7", 00:04:05.931 "assigned_rate_limits": { 00:04:05.931 "rw_ios_per_sec": 0, 00:04:05.931 "rw_mbytes_per_sec": 0, 00:04:05.931 "r_mbytes_per_sec": 0, 00:04:05.931 "w_mbytes_per_sec": 0 00:04:05.931 }, 00:04:05.931 "claimed": false, 00:04:05.931 "zoned": false, 00:04:05.931 "supported_io_types": { 00:04:05.931 "read": true, 00:04:05.931 "write": true, 00:04:05.931 "unmap": true, 00:04:05.931 "flush": true, 00:04:05.931 "reset": true, 
00:04:05.931 "nvme_admin": false, 00:04:05.931 "nvme_io": false, 00:04:05.931 "nvme_io_md": false, 00:04:05.931 "write_zeroes": true, 00:04:05.931 "zcopy": true, 00:04:05.931 "get_zone_info": false, 00:04:05.931 "zone_management": false, 00:04:05.931 "zone_append": false, 00:04:05.931 "compare": false, 00:04:05.931 "compare_and_write": false, 00:04:05.931 "abort": true, 00:04:05.931 "seek_hole": false, 00:04:05.931 "seek_data": false, 00:04:05.931 "copy": true, 00:04:05.931 "nvme_iov_md": false 00:04:05.931 }, 00:04:05.931 "memory_domains": [ 00:04:05.931 { 00:04:05.931 "dma_device_id": "system", 00:04:05.931 "dma_device_type": 1 00:04:05.931 }, 00:04:05.931 { 00:04:05.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.931 "dma_device_type": 2 00:04:05.931 } 00:04:05.931 ], 00:04:05.931 "driver_specific": { 00:04:05.931 "passthru": { 00:04:05.931 "name": "Passthru0", 00:04:05.931 "base_bdev_name": "Malloc2" 00:04:05.931 } 00:04:05.931 } 00:04:05.931 } 00:04:05.931 ]' 00:04:05.931 10:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:06.190 10:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:06.190 10:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:06.190 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.190 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.190 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.190 10:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:06.190 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.190 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.190 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.190 10:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:06.190 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.190 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.190 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.190 10:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:06.190 10:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:06.190 10:05:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:06.190 00:04:06.190 real 0m0.316s 00:04:06.190 user 0m0.180s 00:04:06.190 sys 0m0.038s 00:04:06.190 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.191 10:05:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.191 ************************************ 00:04:06.191 END TEST rpc_daemon_integrity 00:04:06.191 ************************************ 00:04:06.191 10:05:59 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:06.191 10:05:59 rpc -- rpc/rpc.sh@84 -- # killprocess 3697049 00:04:06.191 10:05:59 rpc -- common/autotest_common.sh@954 -- # '[' -z 3697049 ']' 00:04:06.191 10:05:59 rpc -- common/autotest_common.sh@958 -- # kill -0 3697049 00:04:06.191 10:05:59 rpc -- common/autotest_common.sh@959 -- # uname 00:04:06.191 10:05:59 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:06.191 10:05:59 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3697049 
00:04:06.191 10:06:00 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:06.191 10:06:00 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:06.191 10:06:00 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3697049' 00:04:06.191 killing process with pid 3697049 00:04:06.191 10:06:00 rpc -- common/autotest_common.sh@973 -- # kill 3697049 00:04:06.191 10:06:00 rpc -- common/autotest_common.sh@978 -- # wait 3697049 00:04:08.725 00:04:08.725 real 0m4.943s 00:04:08.725 user 0m5.536s 00:04:08.725 sys 0m0.805s 00:04:08.725 10:06:02 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.725 10:06:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.725 ************************************ 00:04:08.725 END TEST rpc 00:04:08.725 ************************************ 00:04:08.725 10:06:02 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:08.725 10:06:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.725 10:06:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.725 10:06:02 -- common/autotest_common.sh@10 -- # set +x 00:04:08.725 ************************************ 00:04:08.725 START TEST skip_rpc 00:04:08.725 ************************************ 00:04:08.725 10:06:02 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:08.725 * Looking for test storage... 00:04:08.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:08.725 10:06:02 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:08.725 10:06:02 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:08.725 10:06:02 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:08.985 10:06:02 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.985 10:06:02 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:08.985 10:06:02 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.985 10:06:02 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:08.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.985 --rc genhtml_branch_coverage=1 00:04:08.985 --rc genhtml_function_coverage=1 00:04:08.985 --rc genhtml_legend=1 00:04:08.985 --rc geninfo_all_blocks=1 00:04:08.985 --rc geninfo_unexecuted_blocks=1 00:04:08.985 00:04:08.985 ' 00:04:08.985 10:06:02 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:08.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.985 --rc genhtml_branch_coverage=1 00:04:08.985 --rc genhtml_function_coverage=1 00:04:08.985 --rc genhtml_legend=1 00:04:08.985 --rc geninfo_all_blocks=1 00:04:08.985 --rc geninfo_unexecuted_blocks=1 00:04:08.985 00:04:08.985 ' 00:04:08.985 10:06:02 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:08.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.985 --rc genhtml_branch_coverage=1 00:04:08.985 --rc genhtml_function_coverage=1 00:04:08.985 --rc genhtml_legend=1 00:04:08.985 --rc geninfo_all_blocks=1 00:04:08.985 --rc geninfo_unexecuted_blocks=1 00:04:08.985 00:04:08.985 ' 00:04:08.985 10:06:02 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:08.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.985 --rc genhtml_branch_coverage=1 00:04:08.985 --rc genhtml_function_coverage=1 00:04:08.985 --rc genhtml_legend=1 00:04:08.985 --rc geninfo_all_blocks=1 00:04:08.985 --rc geninfo_unexecuted_blocks=1 00:04:08.985 00:04:08.985 ' 00:04:08.985 10:06:02 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:08.985 10:06:02 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:08.985 10:06:02 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:08.985 10:06:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.985 10:06:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.985 10:06:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.985 ************************************ 00:04:08.985 START TEST skip_rpc 00:04:08.985 ************************************ 00:04:08.985 10:06:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:08.985 
10:06:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3698157 00:04:08.985 10:06:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.985 10:06:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:08.985 10:06:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:08.985 [2024-12-13 10:06:02.761956] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:08.985 [2024-12-13 10:06:02.762034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3698157 ] 00:04:08.985 [2024-12-13 10:06:02.873660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.244 [2024-12-13 10:06:02.979719] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3698157 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3698157 ']' 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3698157 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3698157 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:14.518 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:14.519 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3698157' 00:04:14.519 killing process with pid 3698157 00:04:14.519 10:06:07 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3698157 00:04:14.519 10:06:07 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3698157 00:04:16.422 00:04:16.422 real 0m7.464s 00:04:16.422 user 0m7.078s 00:04:16.422 sys 0m0.407s 00:04:16.422 10:06:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.422 10:06:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.422 ************************************ 00:04:16.422 END TEST skip_rpc 00:04:16.422 ************************************ 00:04:16.422 10:06:10 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:16.422 10:06:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.422 10:06:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.422 10:06:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.422 ************************************ 00:04:16.422 START TEST skip_rpc_with_json 00:04:16.422 ************************************ 00:04:16.422 10:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:16.422 10:06:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:16.422 10:06:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3699779 00:04:16.422 10:06:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.422 10:06:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:16.422 10:06:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3699779 00:04:16.422 10:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3699779 ']' 00:04:16.422 10:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.422 10:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:16.422 10:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.422 10:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:16.422 10:06:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.422 [2024-12-13 10:06:10.296415] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
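The skip_rpc case that just completed only verifies that a target started with --no-rpc-server refuses RPC; a hedged by-hand equivalent of that check (workspace path and default socket assumed) would be:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
sleep 5   # the test also sleeps before probing (skip_rpc.sh@19 above)
# with the RPC server disabled there is no /var/tmp/spdk.sock listener, so this must fail
$SPDK/scripts/rpc.py spdk_get_version && echo 'unexpected: rpc succeeded' || echo 'rpc refused as expected'
kill %1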
00:04:16.422 [2024-12-13 10:06:10.296524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3699779 ] 00:04:16.681 [2024-12-13 10:06:10.411335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.681 [2024-12-13 10:06:10.516862] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.617 10:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:17.617 10:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:17.617 10:06:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:17.617 10:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.617 10:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:17.617 [2024-12-13 10:06:11.333592] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:17.617 request: 00:04:17.617 { 00:04:17.617 "trtype": "tcp", 00:04:17.617 "method": "nvmf_get_transports", 00:04:17.617 "req_id": 1 00:04:17.617 } 00:04:17.617 Got JSON-RPC error response 00:04:17.617 response: 00:04:17.617 { 00:04:17.617 "code": -19, 00:04:17.617 "message": "No such device" 00:04:17.617 } 00:04:17.617 10:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:17.617 10:06:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:17.617 10:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.617 10:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:17.617 [2024-12-13 10:06:11.341711] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:17.617 10:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.617 10:06:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:17.617 10:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.617 10:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:17.617 10:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.617 10:06:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:17.617 { 00:04:17.617 "subsystems": [ 00:04:17.617 { 00:04:17.617 "subsystem": "fsdev", 00:04:17.617 "config": [ 00:04:17.617 { 00:04:17.617 "method": "fsdev_set_opts", 00:04:17.617 "params": { 00:04:17.617 "fsdev_io_pool_size": 65535, 00:04:17.617 "fsdev_io_cache_size": 256 00:04:17.617 } 00:04:17.617 } 00:04:17.617 ] 00:04:17.617 }, 00:04:17.617 { 00:04:17.617 "subsystem": "keyring", 00:04:17.617 "config": [] 00:04:17.617 }, 00:04:17.617 { 00:04:17.617 "subsystem": "iobuf", 00:04:17.618 "config": [ 00:04:17.618 { 00:04:17.618 "method": "iobuf_set_options", 00:04:17.618 "params": { 00:04:17.618 "small_pool_count": 8192, 00:04:17.618 "large_pool_count": 1024, 00:04:17.618 "small_bufsize": 8192, 00:04:17.618 "large_bufsize": 135168, 00:04:17.618 "enable_numa": false 00:04:17.618 } 00:04:17.618 } 00:04:17.618 ] 00:04:17.618 }, 00:04:17.618 { 00:04:17.618 "subsystem": "sock", 00:04:17.618 "config": [ 
00:04:17.618 { 00:04:17.618 "method": "sock_set_default_impl", 00:04:17.618 "params": { 00:04:17.618 "impl_name": "posix" 00:04:17.618 } 00:04:17.618 }, 00:04:17.618 { 00:04:17.618 "method": "sock_impl_set_options", 00:04:17.618 "params": { 00:04:17.618 "impl_name": "ssl", 00:04:17.618 "recv_buf_size": 4096, 00:04:17.618 "send_buf_size": 4096, 00:04:17.618 "enable_recv_pipe": true, 00:04:17.618 "enable_quickack": false, 00:04:17.618 "enable_placement_id": 0, 00:04:17.618 "enable_zerocopy_send_server": true, 00:04:17.618 "enable_zerocopy_send_client": false, 00:04:17.618 "zerocopy_threshold": 0, 00:04:17.618 "tls_version": 0, 00:04:17.618 "enable_ktls": false 00:04:17.618 } 00:04:17.618 }, 00:04:17.618 { 00:04:17.618 "method": "sock_impl_set_options", 00:04:17.618 "params": { 00:04:17.618 "impl_name": "posix", 00:04:17.618 "recv_buf_size": 2097152, 00:04:17.618 "send_buf_size": 2097152, 00:04:17.618 "enable_recv_pipe": true, 00:04:17.618 "enable_quickack": false, 00:04:17.618 "enable_placement_id": 0, 00:04:17.618 "enable_zerocopy_send_server": true, 00:04:17.618 "enable_zerocopy_send_client": false, 00:04:17.618 "zerocopy_threshold": 0, 00:04:17.618 "tls_version": 0, 00:04:17.618 "enable_ktls": false 00:04:17.618 } 00:04:17.618 } 00:04:17.618 ] 00:04:17.618 }, 00:04:17.618 { 00:04:17.618 "subsystem": "vmd", 00:04:17.618 "config": [] 00:04:17.618 }, 00:04:17.618 { 00:04:17.618 "subsystem": "accel", 00:04:17.618 "config": [ 00:04:17.618 { 00:04:17.618 "method": "accel_set_options", 00:04:17.618 "params": { 00:04:17.618 "small_cache_size": 128, 00:04:17.618 "large_cache_size": 16, 00:04:17.618 "task_count": 2048, 00:04:17.618 "sequence_count": 2048, 00:04:17.618 "buf_count": 2048 00:04:17.618 } 00:04:17.618 } 00:04:17.618 ] 00:04:17.618 }, 00:04:17.618 { 00:04:17.618 "subsystem": "bdev", 00:04:17.618 "config": [ 00:04:17.618 { 00:04:17.618 "method": "bdev_set_options", 00:04:17.618 "params": { 00:04:17.618 "bdev_io_pool_size": 65535, 00:04:17.618 "bdev_io_cache_size": 256, 00:04:17.618 "bdev_auto_examine": true, 00:04:17.618 "iobuf_small_cache_size": 128, 00:04:17.618 "iobuf_large_cache_size": 16 00:04:17.618 } 00:04:17.618 }, 00:04:17.618 { 00:04:17.618 "method": "bdev_raid_set_options", 00:04:17.618 "params": { 00:04:17.618 "process_window_size_kb": 1024, 00:04:17.618 "process_max_bandwidth_mb_sec": 0 00:04:17.618 } 00:04:17.618 }, 00:04:17.618 { 00:04:17.618 "method": "bdev_iscsi_set_options", 00:04:17.618 "params": { 00:04:17.618 "timeout_sec": 30 00:04:17.618 } 00:04:17.618 }, 00:04:17.618 { 00:04:17.618 "method": "bdev_nvme_set_options", 00:04:17.618 "params": { 00:04:17.618 "action_on_timeout": "none", 00:04:17.618 "timeout_us": 0, 00:04:17.618 "timeout_admin_us": 0, 00:04:17.618 "keep_alive_timeout_ms": 10000, 00:04:17.618 "arbitration_burst": 0, 00:04:17.618 "low_priority_weight": 0, 00:04:17.618 "medium_priority_weight": 0, 00:04:17.618 "high_priority_weight": 0, 00:04:17.618 "nvme_adminq_poll_period_us": 10000, 00:04:17.618 "nvme_ioq_poll_period_us": 0, 00:04:17.618 "io_queue_requests": 0, 00:04:17.618 "delay_cmd_submit": true, 00:04:17.618 "transport_retry_count": 4, 00:04:17.618 "bdev_retry_count": 3, 00:04:17.618 "transport_ack_timeout": 0, 00:04:17.618 "ctrlr_loss_timeout_sec": 0, 00:04:17.618 "reconnect_delay_sec": 0, 00:04:17.618 "fast_io_fail_timeout_sec": 0, 00:04:17.618 "disable_auto_failback": false, 00:04:17.618 "generate_uuids": false, 00:04:17.618 "transport_tos": 0, 00:04:17.618 "nvme_error_stat": false, 00:04:17.618 "rdma_srq_size": 0, 00:04:17.618 "io_path_stat": 
false, 00:04:17.618 "allow_accel_sequence": false, 00:04:17.618 "rdma_max_cq_size": 0, 00:04:17.618 "rdma_cm_event_timeout_ms": 0, 00:04:17.618 "dhchap_digests": [ 00:04:17.618 "sha256", 00:04:17.618 "sha384", 00:04:17.618 "sha512" 00:04:17.618 ], 00:04:17.618 "dhchap_dhgroups": [ 00:04:17.618 "null", 00:04:17.618 "ffdhe2048", 00:04:17.618 "ffdhe3072", 00:04:17.618 "ffdhe4096", 00:04:17.618 "ffdhe6144", 00:04:17.618 "ffdhe8192" 00:04:17.618 ], 00:04:17.618 "rdma_umr_per_io": false 00:04:17.618 } 00:04:17.618 }, 00:04:17.618 { 00:04:17.618 "method": "bdev_nvme_set_hotplug", 00:04:17.618 "params": { 00:04:17.618 "period_us": 100000, 00:04:17.618 "enable": false 00:04:17.618 } 00:04:17.618 }, 00:04:17.618 { 00:04:17.618 "method": "bdev_wait_for_examine" 00:04:17.618 } 00:04:17.618 ] 00:04:17.618 }, 00:04:17.618 { 00:04:17.618 "subsystem": "scsi", 00:04:17.618 "config": null 00:04:17.618 }, 00:04:17.618 { 00:04:17.618 "subsystem": "scheduler", 00:04:17.618 "config": [ 00:04:17.618 { 00:04:17.618 "method": "framework_set_scheduler", 00:04:17.618 "params": { 00:04:17.618 "name": "static" 00:04:17.618 } 00:04:17.618 } 00:04:17.618 ] 00:04:17.618 }, 00:04:17.618 { 00:04:17.618 "subsystem": "vhost_scsi", 00:04:17.618 "config": [] 00:04:17.618 }, 00:04:17.618 { 00:04:17.618 "subsystem": "vhost_blk", 00:04:17.618 "config": [] 00:04:17.618 }, 00:04:17.618 { 00:04:17.618 "subsystem": "ublk", 00:04:17.618 "config": [] 00:04:17.618 }, 00:04:17.618 { 00:04:17.618 "subsystem": "nbd", 00:04:17.618 "config": [] 00:04:17.618 }, 00:04:17.618 { 00:04:17.618 "subsystem": "nvmf", 00:04:17.618 "config": [ 00:04:17.618 { 00:04:17.618 "method": "nvmf_set_config", 00:04:17.618 "params": { 00:04:17.618 "discovery_filter": "match_any", 00:04:17.618 "admin_cmd_passthru": { 00:04:17.618 "identify_ctrlr": false 00:04:17.618 }, 00:04:17.618 "dhchap_digests": [ 00:04:17.618 "sha256", 00:04:17.618 "sha384", 00:04:17.618 "sha512" 00:04:17.618 ], 00:04:17.618 "dhchap_dhgroups": [ 00:04:17.618 "null", 00:04:17.618 "ffdhe2048", 00:04:17.618 "ffdhe3072", 00:04:17.618 "ffdhe4096", 00:04:17.618 "ffdhe6144", 00:04:17.618 "ffdhe8192" 00:04:17.618 ] 00:04:17.618 } 00:04:17.618 }, 00:04:17.618 { 00:04:17.618 "method": "nvmf_set_max_subsystems", 00:04:17.618 "params": { 00:04:17.618 "max_subsystems": 1024 00:04:17.618 } 00:04:17.618 }, 00:04:17.618 { 00:04:17.618 "method": "nvmf_set_crdt", 00:04:17.618 "params": { 00:04:17.618 "crdt1": 0, 00:04:17.618 "crdt2": 0, 00:04:17.618 "crdt3": 0 00:04:17.618 } 00:04:17.618 }, 00:04:17.618 { 00:04:17.618 "method": "nvmf_create_transport", 00:04:17.618 "params": { 00:04:17.618 "trtype": "TCP", 00:04:17.618 "max_queue_depth": 128, 00:04:17.618 "max_io_qpairs_per_ctrlr": 127, 00:04:17.618 "in_capsule_data_size": 4096, 00:04:17.618 "max_io_size": 131072, 00:04:17.618 "io_unit_size": 131072, 00:04:17.618 "max_aq_depth": 128, 00:04:17.618 "num_shared_buffers": 511, 00:04:17.618 "buf_cache_size": 4294967295, 00:04:17.618 "dif_insert_or_strip": false, 00:04:17.618 "zcopy": false, 00:04:17.618 "c2h_success": true, 00:04:17.618 "sock_priority": 0, 00:04:17.618 "abort_timeout_sec": 1, 00:04:17.618 "ack_timeout": 0, 00:04:17.618 "data_wr_pool_size": 0 00:04:17.618 } 00:04:17.618 } 00:04:17.618 ] 00:04:17.618 }, 00:04:17.618 { 00:04:17.618 "subsystem": "iscsi", 00:04:17.618 "config": [ 00:04:17.618 { 00:04:17.618 "method": "iscsi_set_options", 00:04:17.618 "params": { 00:04:17.618 "node_base": "iqn.2016-06.io.spdk", 00:04:17.618 "max_sessions": 128, 00:04:17.618 "max_connections_per_session": 2, 00:04:17.618 
"max_queue_depth": 64, 00:04:17.618 "default_time2wait": 2, 00:04:17.618 "default_time2retain": 20, 00:04:17.618 "first_burst_length": 8192, 00:04:17.618 "immediate_data": true, 00:04:17.618 "allow_duplicated_isid": false, 00:04:17.618 "error_recovery_level": 0, 00:04:17.618 "nop_timeout": 60, 00:04:17.618 "nop_in_interval": 30, 00:04:17.618 "disable_chap": false, 00:04:17.618 "require_chap": false, 00:04:17.618 "mutual_chap": false, 00:04:17.618 "chap_group": 0, 00:04:17.618 "max_large_datain_per_connection": 64, 00:04:17.618 "max_r2t_per_connection": 4, 00:04:17.618 "pdu_pool_size": 36864, 00:04:17.618 "immediate_data_pool_size": 16384, 00:04:17.618 "data_out_pool_size": 2048 00:04:17.618 } 00:04:17.618 } 00:04:17.618 ] 00:04:17.618 } 00:04:17.618 ] 00:04:17.618 } 00:04:17.618 10:06:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:17.877 10:06:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3699779 00:04:17.877 10:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3699779 ']' 00:04:17.877 10:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3699779 00:04:17.877 10:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:17.877 10:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:17.877 10:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3699779 00:04:17.877 10:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:17.877 10:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:17.877 10:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3699779' 00:04:17.877 killing process with pid 3699779 00:04:17.877 10:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3699779 00:04:17.877 10:06:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3699779 00:04:20.410 10:06:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3700479 00:04:20.410 10:06:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:20.410 10:06:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:25.690 10:06:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3700479 00:04:25.690 10:06:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3700479 ']' 00:04:25.690 10:06:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3700479 00:04:25.690 10:06:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:25.690 10:06:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.690 10:06:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3700479 00:04:25.690 10:06:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.690 10:06:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.690 10:06:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3700479' 00:04:25.690 killing 
process with pid 3700479 00:04:25.690 10:06:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3700479 00:04:25.690 10:06:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3700479 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:27.595 00:04:27.595 real 0m11.025s 00:04:27.595 user 0m10.638s 00:04:27.595 sys 0m0.835s 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.595 ************************************ 00:04:27.595 END TEST skip_rpc_with_json 00:04:27.595 ************************************ 00:04:27.595 10:06:21 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:27.595 10:06:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.595 10:06:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.595 10:06:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.595 ************************************ 00:04:27.595 START TEST skip_rpc_with_delay 00:04:27.595 ************************************ 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:27.595 [2024-12-13 10:06:21.374399] app.c: 842:spdk_app_start: *ERROR*: 
Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:27.595 00:04:27.595 real 0m0.135s 00:04:27.595 user 0m0.069s 00:04:27.595 sys 0m0.065s 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.595 10:06:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:27.595 ************************************ 00:04:27.595 END TEST skip_rpc_with_delay 00:04:27.595 ************************************ 00:04:27.595 10:06:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:27.595 10:06:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:27.595 10:06:21 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:27.595 10:06:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.595 10:06:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.595 10:06:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.854 ************************************ 00:04:27.854 START TEST exit_on_failed_rpc_init 00:04:27.854 ************************************ 00:04:27.854 10:06:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:27.854 10:06:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3701765 00:04:27.854 10:06:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3701765 00:04:27.854 10:06:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:27.854 10:06:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3701765 ']' 00:04:27.854 10:06:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.854 10:06:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.854 10:06:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.854 10:06:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.854 10:06:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:27.854 [2024-12-13 10:06:21.580815] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:27.854 [2024-12-13 10:06:21.580904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3701765 ] 00:04:27.854 [2024-12-13 10:06:21.693506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.113 [2024-12-13 10:06:21.798804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.049 10:06:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.049 10:06:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:29.049 10:06:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.049 10:06:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.049 10:06:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:29.049 10:06:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.049 10:06:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.049 10:06:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.049 10:06:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.049 10:06:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.049 10:06:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.049 10:06:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.049 10:06:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.049 10:06:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:29.049 10:06:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.049 [2024-12-13 10:06:22.678971] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:29.049 [2024-12-13 10:06:22.679075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3701888 ] 00:04:29.049 [2024-12-13 10:06:22.791723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.049 [2024-12-13 10:06:22.900804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.049 [2024-12-13 10:06:22.900897] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:29.049 [2024-12-13 10:06:22.900915] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:29.050 [2024-12-13 10:06:22.900925] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:29.308 10:06:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:29.308 10:06:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:29.308 10:06:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:29.308 10:06:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:29.308 10:06:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:29.308 10:06:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:29.308 10:06:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:29.308 10:06:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3701765 00:04:29.308 10:06:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3701765 ']' 00:04:29.308 10:06:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3701765 00:04:29.308 10:06:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:29.308 10:06:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.308 10:06:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3701765 00:04:29.308 10:06:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.308 10:06:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.308 10:06:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3701765' 00:04:29.308 killing process with pid 3701765 00:04:29.308 10:06:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3701765 00:04:29.308 10:06:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3701765 00:04:31.843 00:04:31.843 real 0m3.986s 00:04:31.843 user 0m4.368s 00:04:31.843 sys 0m0.562s 00:04:31.843 10:06:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.843 10:06:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:31.843 ************************************ 00:04:31.843 END TEST exit_on_failed_rpc_init 00:04:31.843 ************************************ 00:04:31.843 10:06:25 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:31.843 00:04:31.843 real 0m23.054s 00:04:31.843 user 0m22.366s 00:04:31.843 sys 0m2.131s 00:04:31.843 10:06:25 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.843 10:06:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.843 ************************************ 00:04:31.843 END TEST skip_rpc 00:04:31.843 ************************************ 00:04:31.843 10:06:25 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:31.843 10:06:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.843 10:06:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.843 10:06:25 -- 
common/autotest_common.sh@10 -- # set +x 00:04:31.843 ************************************ 00:04:31.843 START TEST rpc_client 00:04:31.843 ************************************ 00:04:31.843 10:06:25 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:31.843 * Looking for test storage... 00:04:31.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:31.843 10:06:25 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:31.843 10:06:25 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:31.843 10:06:25 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:31.843 10:06:25 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:31.843 10:06:25 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.843 10:06:25 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.843 10:06:25 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.843 10:06:25 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.843 10:06:25 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.843 10:06:25 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.843 10:06:25 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.843 10:06:25 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.843 10:06:25 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.843 10:06:25 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.843 10:06:25 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.843 10:06:25 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:31.843 10:06:25 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:31.843 10:06:25 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.843 10:06:25 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.843 10:06:25 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:31.843 10:06:25 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:31.843 10:06:25 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.843 10:06:25 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:32.102 10:06:25 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.102 10:06:25 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:32.102 10:06:25 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:32.102 10:06:25 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.102 10:06:25 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:32.102 10:06:25 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.102 10:06:25 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.102 10:06:25 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.102 10:06:25 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:32.102 10:06:25 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.102 10:06:25 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:32.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.102 --rc genhtml_branch_coverage=1 00:04:32.102 --rc genhtml_function_coverage=1 00:04:32.102 --rc genhtml_legend=1 00:04:32.102 --rc geninfo_all_blocks=1 00:04:32.102 --rc geninfo_unexecuted_blocks=1 00:04:32.102 00:04:32.102 ' 00:04:32.102 10:06:25 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:32.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.102 --rc genhtml_branch_coverage=1 00:04:32.102 --rc genhtml_function_coverage=1 00:04:32.102 --rc genhtml_legend=1 00:04:32.102 --rc geninfo_all_blocks=1 00:04:32.102 --rc geninfo_unexecuted_blocks=1 00:04:32.102 00:04:32.102 ' 00:04:32.102 10:06:25 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:32.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.102 --rc genhtml_branch_coverage=1 00:04:32.102 --rc genhtml_function_coverage=1 00:04:32.102 --rc genhtml_legend=1 00:04:32.102 --rc geninfo_all_blocks=1 00:04:32.102 --rc geninfo_unexecuted_blocks=1 00:04:32.102 00:04:32.102 ' 00:04:32.102 10:06:25 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:32.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.102 --rc genhtml_branch_coverage=1 00:04:32.102 --rc genhtml_function_coverage=1 00:04:32.102 --rc genhtml_legend=1 00:04:32.102 --rc geninfo_all_blocks=1 00:04:32.102 --rc geninfo_unexecuted_blocks=1 00:04:32.102 00:04:32.102 ' 00:04:32.102 10:06:25 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:32.102 OK 00:04:32.102 10:06:25 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:32.102 00:04:32.102 real 0m0.217s 00:04:32.102 user 0m0.124s 00:04:32.102 sys 0m0.103s 00:04:32.102 10:06:25 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.102 10:06:25 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:32.102 ************************************ 00:04:32.102 END TEST rpc_client 00:04:32.102 ************************************ 00:04:32.102 10:06:25 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:04:32.102 10:06:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.102 10:06:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.102 10:06:25 -- common/autotest_common.sh@10 -- # set +x 00:04:32.102 ************************************ 00:04:32.103 START TEST json_config 00:04:32.103 ************************************ 00:04:32.103 10:06:25 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:32.103 10:06:25 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:32.103 10:06:25 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:32.103 10:06:25 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:32.362 10:06:26 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:32.362 10:06:26 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.362 10:06:26 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.362 10:06:26 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.362 10:06:26 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.362 10:06:26 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.362 10:06:26 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.362 10:06:26 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.362 10:06:26 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.362 10:06:26 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.362 10:06:26 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.362 10:06:26 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.362 10:06:26 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:32.362 10:06:26 json_config -- scripts/common.sh@345 -- # : 1 00:04:32.362 10:06:26 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.362 10:06:26 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.362 10:06:26 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:32.362 10:06:26 json_config -- scripts/common.sh@353 -- # local d=1 00:04:32.362 10:06:26 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.362 10:06:26 json_config -- scripts/common.sh@355 -- # echo 1 00:04:32.362 10:06:26 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.362 10:06:26 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:32.362 10:06:26 json_config -- scripts/common.sh@353 -- # local d=2 00:04:32.363 10:06:26 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.363 10:06:26 json_config -- scripts/common.sh@355 -- # echo 2 00:04:32.363 10:06:26 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.363 10:06:26 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.363 10:06:26 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.363 10:06:26 json_config -- scripts/common.sh@368 -- # return 0 00:04:32.363 10:06:26 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.363 10:06:26 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:32.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.363 --rc genhtml_branch_coverage=1 00:04:32.363 --rc genhtml_function_coverage=1 00:04:32.363 --rc genhtml_legend=1 00:04:32.363 --rc geninfo_all_blocks=1 00:04:32.363 --rc geninfo_unexecuted_blocks=1 00:04:32.363 00:04:32.363 ' 00:04:32.363 10:06:26 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:32.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.363 --rc genhtml_branch_coverage=1 00:04:32.363 --rc genhtml_function_coverage=1 00:04:32.363 --rc genhtml_legend=1 00:04:32.363 --rc geninfo_all_blocks=1 00:04:32.363 --rc geninfo_unexecuted_blocks=1 00:04:32.363 00:04:32.363 ' 00:04:32.363 10:06:26 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:32.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.363 --rc genhtml_branch_coverage=1 00:04:32.363 --rc genhtml_function_coverage=1 00:04:32.363 --rc genhtml_legend=1 00:04:32.363 --rc geninfo_all_blocks=1 00:04:32.363 --rc geninfo_unexecuted_blocks=1 00:04:32.363 00:04:32.363 ' 00:04:32.363 10:06:26 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:32.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.363 --rc genhtml_branch_coverage=1 00:04:32.363 --rc genhtml_function_coverage=1 00:04:32.363 --rc genhtml_legend=1 00:04:32.363 --rc geninfo_all_blocks=1 00:04:32.363 --rc geninfo_unexecuted_blocks=1 00:04:32.363 00:04:32.363 ' 00:04:32.363 10:06:26 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:32.363 10:06:26 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:32.363 10:06:26 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:32.363 10:06:26 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:32.363 10:06:26 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:32.363 10:06:26 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:32.363 10:06:26 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.363 10:06:26 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.363 10:06:26 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.363 10:06:26 json_config -- paths/export.sh@5 -- # export PATH 00:04:32.363 10:06:26 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@51 -- # : 0 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:32.363 10:06:26 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:32.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:32.363 10:06:26 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:32.363 10:06:26 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:32.363 10:06:26 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:32.363 10:06:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:32.363 10:06:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:32.363 10:06:26 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:32.363 10:06:26 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:32.363 10:06:26 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:32.363 10:06:26 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:32.363 10:06:26 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:32.363 10:06:26 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:32.363 10:06:26 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:32.363 10:06:26 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:32.363 10:06:26 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:32.363 10:06:26 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:32.363 10:06:26 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:32.363 10:06:26 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:32.363 INFO: JSON configuration test init 00:04:32.363 10:06:26 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:32.363 10:06:26 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:32.363 10:06:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:32.363 10:06:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.363 10:06:26 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:32.363 10:06:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:32.363 10:06:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.363 10:06:26 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:32.363 10:06:26 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:32.363 10:06:26 json_config -- json_config/common.sh@10 -- # shift 00:04:32.363 10:06:26 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:32.363 10:06:26 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:32.363 10:06:26 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:32.363 10:06:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.363 10:06:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.363 10:06:26 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3702679 00:04:32.363 10:06:26 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:32.363 Waiting for target to run... 00:04:32.363 10:06:26 json_config -- json_config/common.sh@25 -- # waitforlisten 3702679 /var/tmp/spdk_tgt.sock 00:04:32.363 10:06:26 json_config -- common/autotest_common.sh@835 -- # '[' -z 3702679 ']' 00:04:32.363 10:06:26 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:32.363 10:06:26 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.363 10:06:26 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:32.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:32.363 10:06:26 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:32.363 10:06:26 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.363 10:06:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.363 [2024-12-13 10:06:26.137754] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:32.363 [2024-12-13 10:06:26.137846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3702679 ] 00:04:32.623 [2024-12-13 10:06:26.462575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.882 [2024-12-13 10:06:26.559403] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.140 10:06:26 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.140 10:06:26 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:33.140 10:06:26 json_config -- json_config/common.sh@26 -- # echo '' 00:04:33.140 00:04:33.140 10:06:26 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:33.140 10:06:26 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:33.140 10:06:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:33.140 10:06:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.140 10:06:26 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:33.141 10:06:26 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:33.141 10:06:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:33.141 10:06:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.141 10:06:26 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:33.141 10:06:26 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:33.141 10:06:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:37.332 10:06:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.332 10:06:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:37.332 10:06:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:37.332 10:06:30 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@54 -- # sort 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:37.332 10:06:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:37.332 10:06:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:37.332 10:06:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.332 10:06:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:37.332 10:06:30 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:37.332 10:06:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:37.332 MallocForNvmf0 00:04:37.332 10:06:31 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:37.332 10:06:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:37.591 MallocForNvmf1 00:04:37.591 10:06:31 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:37.591 10:06:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:37.591 [2024-12-13 10:06:31.480337] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:37.850 10:06:31 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:37.850 10:06:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:37.850 10:06:31 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:37.850 10:06:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:38.108 10:06:31 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:38.108 10:06:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:38.367 10:06:32 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:38.367 10:06:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:38.367 [2024-12-13 10:06:32.210684] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:38.367 10:06:32 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:38.367 10:06:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:38.367 10:06:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.626 10:06:32 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:38.626 10:06:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:38.626 10:06:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.626 10:06:32 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:38.626 10:06:32 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:38.626 10:06:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:38.626 MallocBdevForConfigChangeCheck 00:04:38.626 10:06:32 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:38.626 10:06:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:38.626 10:06:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.885 10:06:32 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:38.885 10:06:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:39.144 10:06:32 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:39.144 INFO: shutting down applications... 
00:04:39.144 10:06:32 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:39.144 10:06:32 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:39.144 10:06:32 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:39.144 10:06:32 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:40.520 Calling clear_iscsi_subsystem 00:04:40.520 Calling clear_nvmf_subsystem 00:04:40.520 Calling clear_nbd_subsystem 00:04:40.520 Calling clear_ublk_subsystem 00:04:40.520 Calling clear_vhost_blk_subsystem 00:04:40.520 Calling clear_vhost_scsi_subsystem 00:04:40.520 Calling clear_bdev_subsystem 00:04:40.520 10:06:34 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:40.520 10:06:34 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:40.520 10:06:34 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:40.520 10:06:34 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:40.520 10:06:34 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:40.520 10:06:34 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:41.088 10:06:34 json_config -- json_config/json_config.sh@352 -- # break 00:04:41.088 10:06:34 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:41.088 10:06:34 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:41.088 10:06:34 json_config -- json_config/common.sh@31 -- # local app=target 00:04:41.088 10:06:34 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:41.088 10:06:34 json_config -- json_config/common.sh@35 -- # [[ -n 3702679 ]] 00:04:41.088 10:06:34 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3702679 00:04:41.088 10:06:34 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:41.088 10:06:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:41.088 10:06:34 json_config -- json_config/common.sh@41 -- # kill -0 3702679 00:04:41.088 10:06:34 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:41.348 10:06:35 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:41.348 10:06:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:41.348 10:06:35 json_config -- json_config/common.sh@41 -- # kill -0 3702679 00:04:41.348 10:06:35 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:41.915 10:06:35 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:41.915 10:06:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:41.915 10:06:35 json_config -- json_config/common.sh@41 -- # kill -0 3702679 00:04:41.915 10:06:35 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:41.915 10:06:35 json_config -- json_config/common.sh@43 -- # break 00:04:41.915 10:06:35 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:41.915 10:06:35 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:41.915 SPDK target shutdown done 00:04:41.915 10:06:35 
json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:41.915 INFO: relaunching applications... 00:04:41.915 10:06:35 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:41.915 10:06:35 json_config -- json_config/common.sh@9 -- # local app=target 00:04:41.915 10:06:35 json_config -- json_config/common.sh@10 -- # shift 00:04:41.915 10:06:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:41.915 10:06:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:41.915 10:06:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:41.915 10:06:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.915 10:06:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.915 10:06:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3704384 00:04:41.915 10:06:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:41.915 Waiting for target to run... 00:04:41.915 10:06:35 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:41.915 10:06:35 json_config -- json_config/common.sh@25 -- # waitforlisten 3704384 /var/tmp/spdk_tgt.sock 00:04:41.915 10:06:35 json_config -- common/autotest_common.sh@835 -- # '[' -z 3704384 ']' 00:04:41.915 10:06:35 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:41.915 10:06:35 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.915 10:06:35 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:41.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:41.915 10:06:35 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.915 10:06:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.174 [2024-12-13 10:06:35.809416] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
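The json_config_test_shutdown_app trace above (kill -SIGINT, then kill -0 probes every 0.5 s for up to 30 tries) is a generic graceful-shutdown pattern. A rough sketch of that loop, not the literal common.sh source:

pid=3702679   # target pid from the trace

kill -SIGINT "$pid"
for (( i = 0; i < 30; i++ )); do
    # kill -0 only checks whether the process still exists
    kill -0 "$pid" 2>/dev/null || break
    sleep 0.5
done
if kill -0 "$pid" 2>/dev/null; then
    echo "target did not exit in time" >&2
else
    echo "SPDK target shutdown done"
fi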
00:04:42.174 [2024-12-13 10:06:35.809518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3704384 ] 00:04:42.433 [2024-12-13 10:06:36.301292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.692 [2024-12-13 10:06:36.407699] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.006 [2024-12-13 10:06:40.092462] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.006 [2024-12-13 10:06:40.124793] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:47.006 10:06:40 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.006 10:06:40 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:47.006 10:06:40 json_config -- json_config/common.sh@26 -- # echo '' 00:04:47.006 00:04:47.006 10:06:40 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:47.006 10:06:40 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:47.006 INFO: Checking if target configuration is the same... 00:04:47.006 10:06:40 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:47.006 10:06:40 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:47.006 10:06:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:47.006 + '[' 2 -ne 2 ']' 00:04:47.006 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:47.006 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:47.006 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:47.006 +++ basename /dev/fd/62 00:04:47.006 ++ mktemp /tmp/62.XXX 00:04:47.006 + tmp_file_1=/tmp/62.viC 00:04:47.006 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:47.006 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:47.006 + tmp_file_2=/tmp/spdk_tgt_config.json.Wot 00:04:47.006 + ret=0 00:04:47.006 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:47.006 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:47.006 + diff -u /tmp/62.viC /tmp/spdk_tgt_config.json.Wot 00:04:47.006 + echo 'INFO: JSON config files are the same' 00:04:47.006 INFO: JSON config files are the same 00:04:47.006 + rm /tmp/62.viC /tmp/spdk_tgt_config.json.Wot 00:04:47.006 + exit 0 00:04:47.006 10:06:40 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:47.006 10:06:40 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:47.006 INFO: changing configuration and checking if this can be detected... 
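The "Checking if target configuration is the same..." step compares the relaunched target's live configuration against the previously saved spdk_tgt_config.json. Both sides are normalized with config_filter.py -method sort before diffing, so key ordering cannot cause a false mismatch. A sketch of that comparison using the same helpers the trace invokes:

rpc="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
filter=./test/json_config/config_filter.py

live=$(mktemp)
saved=$(mktemp)

# sort both the live config and the saved file before comparing
$rpc save_config | $filter -method sort > "$live"
$filter -method sort < spdk_tgt_config.json > "$saved"

if diff -u "$live" "$saved"; then
    echo "INFO: JSON config files are the same"
else
    echo "INFO: configuration change detected."
fi
rm -f "$live" "$saved"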
00:04:47.006 10:06:40 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:47.006 10:06:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:47.006 10:06:40 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:47.006 10:06:40 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:47.006 10:06:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:47.006 + '[' 2 -ne 2 ']' 00:04:47.006 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:47.006 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:47.006 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:47.006 +++ basename /dev/fd/62 00:04:47.006 ++ mktemp /tmp/62.XXX 00:04:47.006 + tmp_file_1=/tmp/62.TgS 00:04:47.006 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:47.006 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:47.006 + tmp_file_2=/tmp/spdk_tgt_config.json.DNt 00:04:47.006 + ret=0 00:04:47.006 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:47.265 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:47.265 + diff -u /tmp/62.TgS /tmp/spdk_tgt_config.json.DNt 00:04:47.265 + ret=1 00:04:47.265 + echo '=== Start of file: /tmp/62.TgS ===' 00:04:47.265 + cat /tmp/62.TgS 00:04:47.265 + echo '=== End of file: /tmp/62.TgS ===' 00:04:47.265 + echo '' 00:04:47.265 + echo '=== Start of file: /tmp/spdk_tgt_config.json.DNt ===' 00:04:47.265 + cat /tmp/spdk_tgt_config.json.DNt 00:04:47.265 + echo '=== End of file: /tmp/spdk_tgt_config.json.DNt ===' 00:04:47.265 + echo '' 00:04:47.265 + rm /tmp/62.TgS /tmp/spdk_tgt_config.json.DNt 00:04:47.265 + exit 1 00:04:47.265 10:06:41 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:47.265 INFO: configuration change detected. 
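The second half of the check then mutates the running configuration (deleting MallocBdevForConfigChangeCheck) and expects the very same diff to fail; asserting a non-zero diff is what turns the ret=1 above into a pass. Roughly:

rpc="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
filter=./test/json_config/config_filter.py

# mutate the live config, then the sorted diff against the saved file must now fail
$rpc bdev_malloc_delete MallocBdevForConfigChangeCheck

if $rpc save_config | $filter -method sort \
        | diff -u - <($filter -method sort < spdk_tgt_config.json); then
    echo "ERROR: configuration change was not detected" >&2
    exit 1
fi
echo "INFO: configuration change detected."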
00:04:47.265 10:06:41 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:47.265 10:06:41 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:47.265 10:06:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:47.265 10:06:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.265 10:06:41 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:47.265 10:06:41 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:47.265 10:06:41 json_config -- json_config/json_config.sh@324 -- # [[ -n 3704384 ]] 00:04:47.265 10:06:41 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:47.265 10:06:41 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:47.265 10:06:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:47.265 10:06:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.265 10:06:41 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:47.265 10:06:41 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:47.265 10:06:41 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:47.265 10:06:41 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:47.265 10:06:41 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:47.265 10:06:41 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:47.265 10:06:41 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:47.265 10:06:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.524 10:06:41 json_config -- json_config/json_config.sh@330 -- # killprocess 3704384 00:04:47.524 10:06:41 json_config -- common/autotest_common.sh@954 -- # '[' -z 3704384 ']' 00:04:47.524 10:06:41 json_config -- common/autotest_common.sh@958 -- # kill -0 3704384 00:04:47.524 10:06:41 json_config -- common/autotest_common.sh@959 -- # uname 00:04:47.524 10:06:41 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.524 10:06:41 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3704384 00:04:47.524 10:06:41 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.524 10:06:41 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.524 10:06:41 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3704384' 00:04:47.524 killing process with pid 3704384 00:04:47.524 10:06:41 json_config -- common/autotest_common.sh@973 -- # kill 3704384 00:04:47.524 10:06:41 json_config -- common/autotest_common.sh@978 -- # wait 3704384 00:04:50.059 10:06:43 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:50.059 10:06:43 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:50.059 10:06:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:50.059 10:06:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.059 10:06:43 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:50.059 10:06:43 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:50.059 INFO: Success 00:04:50.059 00:04:50.059 real 0m17.580s 
00:04:50.059 user 0m17.981s 00:04:50.059 sys 0m2.784s 00:04:50.059 10:06:43 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.059 10:06:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.059 ************************************ 00:04:50.059 END TEST json_config 00:04:50.059 ************************************ 00:04:50.059 10:06:43 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:50.059 10:06:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.059 10:06:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.059 10:06:43 -- common/autotest_common.sh@10 -- # set +x 00:04:50.059 ************************************ 00:04:50.059 START TEST json_config_extra_key 00:04:50.059 ************************************ 00:04:50.059 10:06:43 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:50.059 10:06:43 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:50.059 10:06:43 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:50.059 10:06:43 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:50.059 10:06:43 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:50.059 10:06:43 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.059 10:06:43 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:50.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.059 --rc genhtml_branch_coverage=1 00:04:50.059 --rc genhtml_function_coverage=1 00:04:50.059 --rc genhtml_legend=1 00:04:50.059 --rc geninfo_all_blocks=1 00:04:50.059 --rc geninfo_unexecuted_blocks=1 00:04:50.059 00:04:50.059 ' 00:04:50.059 10:06:43 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:50.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.059 --rc genhtml_branch_coverage=1 00:04:50.059 --rc genhtml_function_coverage=1 00:04:50.059 --rc genhtml_legend=1 00:04:50.059 --rc geninfo_all_blocks=1 00:04:50.059 --rc geninfo_unexecuted_blocks=1 00:04:50.059 00:04:50.059 ' 00:04:50.059 10:06:43 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:50.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.059 --rc genhtml_branch_coverage=1 00:04:50.059 --rc genhtml_function_coverage=1 00:04:50.059 --rc genhtml_legend=1 00:04:50.059 --rc geninfo_all_blocks=1 00:04:50.059 --rc geninfo_unexecuted_blocks=1 00:04:50.059 00:04:50.059 ' 00:04:50.059 10:06:43 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:50.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.059 --rc genhtml_branch_coverage=1 00:04:50.059 --rc genhtml_function_coverage=1 00:04:50.059 --rc genhtml_legend=1 00:04:50.059 --rc geninfo_all_blocks=1 00:04:50.059 --rc geninfo_unexecuted_blocks=1 00:04:50.059 00:04:50.059 ' 00:04:50.059 10:06:43 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:50.059 10:06:43 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:50.059 10:06:43 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.059 10:06:43 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.059 10:06:43 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.059 10:06:43 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.059 
10:06:43 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.059 10:06:43 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.059 10:06:43 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.059 10:06:43 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.059 10:06:43 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.059 10:06:43 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.059 10:06:43 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:50.059 10:06:43 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:50.059 10:06:43 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.059 10:06:43 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.059 10:06:43 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.059 10:06:43 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.059 10:06:43 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.059 10:06:43 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.059 10:06:43 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.059 10:06:43 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.059 10:06:43 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.059 10:06:43 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:50.060 10:06:43 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.060 10:06:43 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:50.060 10:06:43 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:50.060 10:06:43 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:50.060 10:06:43 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.060 10:06:43 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.060 10:06:43 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.060 10:06:43 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:50.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:50.060 10:06:43 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:50.060 10:06:43 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:50.060 10:06:43 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:50.060 10:06:43 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:50.060 10:06:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:50.060 10:06:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:50.060 10:06:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:50.060 10:06:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:50.060 10:06:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:50.060 10:06:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:50.060 10:06:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:50.060 10:06:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:50.060 10:06:43 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:50.060 10:06:43 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:50.060 INFO: launching applications... 
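json_config_extra_key's common.sh tracks each application in associative arrays (app_pid, app_socket, app_params, configs_path), as the declare -A lines above show, and json_config_test_start_app launches spdk_tgt from those entries. A reduced sketch of that bookkeeping and launch (binary path, flags and extra_key.json as in the trace):

declare -A app_pid app_socket app_params configs_path

app=target
app_socket[$app]=/var/tmp/spdk_tgt.sock
app_params[$app]='-m 0x1 -s 1024'
configs_path[$app]=./test/json_config/extra_key.json

# params left unquoted on purpose so the flags split into separate arguments
./build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" \
    --json "${configs_path[$app]}" &
app_pid[$app]=$!
echo 'Waiting for target to run...'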
00:04:50.060 10:06:43 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:50.060 10:06:43 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:50.060 10:06:43 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:50.060 10:06:43 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:50.060 10:06:43 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:50.060 10:06:43 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:50.060 10:06:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.060 10:06:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.060 10:06:43 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3705861 00:04:50.060 10:06:43 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:50.060 Waiting for target to run... 00:04:50.060 10:06:43 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3705861 /var/tmp/spdk_tgt.sock 00:04:50.060 10:06:43 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3705861 ']' 00:04:50.060 10:06:43 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.060 10:06:43 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:50.060 10:06:43 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.060 10:06:43 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:50.060 10:06:43 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.060 10:06:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:50.060 [2024-12-13 10:06:43.790263] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:50.060 [2024-12-13 10:06:43.790350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3705861 ] 00:04:50.627 [2024-12-13 10:06:44.286648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.627 [2024-12-13 10:06:44.392048] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.563 10:06:45 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.563 10:06:45 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:51.563 10:06:45 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:51.563 00:04:51.563 10:06:45 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:51.563 INFO: shutting down applications... 
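waitforlisten (from autotest_common.sh) is what produces the "Waiting for process to start up and listen on UNIX domain socket ..." message above: it blocks until the target's RPC socket answers or the process dies. A simplified stand-in for that wait, polling with rpc.py (the retry count and the spdk_get_version probe are illustrative choices, not lifted from the script):

rpc_addr=/var/tmp/spdk_tgt.sock
pid=3705861   # pid reported in the trace

for (( retries = 100; retries > 0; retries-- )); do
    # bail out early if the target died during startup
    kill -0 "$pid" 2>/dev/null || { echo "target exited prematurely" >&2; exit 1; }
    # spdk_get_version only succeeds once the RPC server is listening
    ./scripts/rpc.py -s "$rpc_addr" spdk_get_version >/dev/null 2>&1 && break
    sleep 0.5
done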
00:04:51.563 10:06:45 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:51.563 10:06:45 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:51.563 10:06:45 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:51.563 10:06:45 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3705861 ]] 00:04:51.563 10:06:45 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3705861 00:04:51.563 10:06:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:51.563 10:06:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.563 10:06:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3705861 00:04:51.563 10:06:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.822 10:06:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.822 10:06:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.822 10:06:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3705861 00:04:51.822 10:06:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.390 10:06:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.390 10:06:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.390 10:06:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3705861 00:04:52.390 10:06:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.956 10:06:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.956 10:06:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.956 10:06:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3705861 00:04:52.956 10:06:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.523 10:06:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.523 10:06:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.523 10:06:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3705861 00:04:53.523 10:06:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.782 10:06:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.782 10:06:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.782 10:06:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3705861 00:04:53.782 10:06:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:54.349 10:06:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:54.349 10:06:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.349 10:06:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3705861 00:04:54.349 10:06:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:54.349 10:06:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:54.349 10:06:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:54.349 10:06:48 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:54.349 SPDK target shutdown done 00:04:54.349 10:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:54.349 Success 00:04:54.349 00:04:54.349 real 0m4.605s 00:04:54.349 user 0m3.904s 00:04:54.349 sys 0m0.718s 
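The scripts/common.sh fragments scattered through these tests (lt 1.15 2, decimal, the IFS=.- read -ra ver1/ver2 loops) implement a pure-bash version comparison used to decide which lcov options apply. An approximation of that logic, offered as a sketch rather than a copy of cmp_versions:

version_lt() {
    # return 0 (true) if $1 < $2, comparing dot/dash-separated numeric fields
    local -a a b
    IFS=.- read -ra a <<< "$1"
    IFS=.- read -ra b <<< "$2"
    local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < len; i++ )); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo 'lcov is older than 2: use the legacy --rc options'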
00:04:54.349 10:06:48 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.349 10:06:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:54.349 ************************************ 00:04:54.349 END TEST json_config_extra_key 00:04:54.349 ************************************ 00:04:54.349 10:06:48 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:54.349 10:06:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.349 10:06:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.349 10:06:48 -- common/autotest_common.sh@10 -- # set +x 00:04:54.349 ************************************ 00:04:54.349 START TEST alias_rpc 00:04:54.349 ************************************ 00:04:54.349 10:06:48 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:54.608 * Looking for test storage... 00:04:54.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:54.608 10:06:48 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:54.608 10:06:48 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:54.608 10:06:48 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:54.608 10:06:48 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:54.608 10:06:48 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.608 10:06:48 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.608 10:06:48 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.608 10:06:48 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.608 10:06:48 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.608 10:06:48 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.608 10:06:48 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.608 10:06:48 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.608 10:06:48 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.608 10:06:48 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.608 10:06:48 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.608 10:06:48 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:54.608 10:06:48 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:54.608 10:06:48 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.608 10:06:48 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.608 10:06:48 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:54.608 10:06:48 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:54.608 10:06:48 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.608 10:06:48 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:54.608 10:06:48 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.608 10:06:48 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:54.609 10:06:48 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:54.609 10:06:48 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.609 10:06:48 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:54.609 10:06:48 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.609 10:06:48 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.609 10:06:48 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.609 10:06:48 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:54.609 10:06:48 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.609 10:06:48 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:54.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.609 --rc genhtml_branch_coverage=1 00:04:54.609 --rc genhtml_function_coverage=1 00:04:54.609 --rc genhtml_legend=1 00:04:54.609 --rc geninfo_all_blocks=1 00:04:54.609 --rc geninfo_unexecuted_blocks=1 00:04:54.609 00:04:54.609 ' 00:04:54.609 10:06:48 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:54.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.609 --rc genhtml_branch_coverage=1 00:04:54.609 --rc genhtml_function_coverage=1 00:04:54.609 --rc genhtml_legend=1 00:04:54.609 --rc geninfo_all_blocks=1 00:04:54.609 --rc geninfo_unexecuted_blocks=1 00:04:54.609 00:04:54.609 ' 00:04:54.609 10:06:48 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:54.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.609 --rc genhtml_branch_coverage=1 00:04:54.609 --rc genhtml_function_coverage=1 00:04:54.609 --rc genhtml_legend=1 00:04:54.609 --rc geninfo_all_blocks=1 00:04:54.609 --rc geninfo_unexecuted_blocks=1 00:04:54.609 00:04:54.609 ' 00:04:54.609 10:06:48 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:54.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.609 --rc genhtml_branch_coverage=1 00:04:54.609 --rc genhtml_function_coverage=1 00:04:54.609 --rc genhtml_legend=1 00:04:54.609 --rc geninfo_all_blocks=1 00:04:54.609 --rc geninfo_unexecuted_blocks=1 00:04:54.609 00:04:54.609 ' 00:04:54.609 10:06:48 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:54.609 10:06:48 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3706687 00:04:54.609 10:06:48 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.609 10:06:48 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3706687 00:04:54.609 10:06:48 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3706687 ']' 00:04:54.609 10:06:48 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.609 10:06:48 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.609 10:06:48 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:04:54.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.609 10:06:48 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.609 10:06:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.609 [2024-12-13 10:06:48.437704] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:54.609 [2024-12-13 10:06:48.437797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3706687 ] 00:04:54.867 [2024-12-13 10:06:48.550134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.867 [2024-12-13 10:06:48.647959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.804 10:06:49 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.804 10:06:49 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:55.804 10:06:49 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:55.804 10:06:49 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3706687 00:04:55.804 10:06:49 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3706687 ']' 00:04:55.804 10:06:49 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3706687 00:04:55.804 10:06:49 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:55.804 10:06:49 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.804 10:06:49 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3706687 00:04:56.062 10:06:49 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.062 10:06:49 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.062 10:06:49 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3706687' 00:04:56.062 killing process with pid 3706687 00:04:56.062 10:06:49 alias_rpc -- common/autotest_common.sh@973 -- # kill 3706687 00:04:56.062 10:06:49 alias_rpc -- common/autotest_common.sh@978 -- # wait 3706687 00:04:58.593 00:04:58.593 real 0m3.876s 00:04:58.593 user 0m3.923s 00:04:58.593 sys 0m0.548s 00:04:58.593 10:06:52 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.593 10:06:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.593 ************************************ 00:04:58.593 END TEST alias_rpc 00:04:58.593 ************************************ 00:04:58.593 10:06:52 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:58.593 10:06:52 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:58.593 10:06:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.593 10:06:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.593 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:04:58.593 ************************************ 00:04:58.594 START TEST spdkcli_tcp 00:04:58.594 ************************************ 00:04:58.594 10:06:52 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:58.594 * Looking for test storage... 
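Stripped of the coverage plumbing, the alias_rpc test above reduces to: start a bare spdk_tgt, replay a configuration through rpc.py load_config -i (the exact invocation the trace shows), and tear the target down. A condensed sketch; the JSON file name here is only a placeholder, the real test pipes its own saved config:

./build/bin/spdk_tgt &
spdk_tgt_pid=$!

# wait until the default RPC socket (/var/tmp/spdk.sock) answers
until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done

# feed a previously saved configuration back in through load_config
./scripts/rpc.py load_config -i < saved_config.json

kill -SIGINT "$spdk_tgt_pid"
wait "$spdk_tgt_pid" || true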
00:04:58.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:58.594 10:06:52 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:58.594 10:06:52 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:58.594 10:06:52 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:58.594 10:06:52 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.594 10:06:52 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:58.594 10:06:52 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.594 10:06:52 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:58.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.594 --rc genhtml_branch_coverage=1 00:04:58.594 --rc genhtml_function_coverage=1 00:04:58.594 --rc genhtml_legend=1 00:04:58.594 --rc geninfo_all_blocks=1 00:04:58.594 --rc geninfo_unexecuted_blocks=1 00:04:58.594 00:04:58.594 ' 00:04:58.594 10:06:52 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:58.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.594 --rc genhtml_branch_coverage=1 00:04:58.594 --rc genhtml_function_coverage=1 00:04:58.594 --rc genhtml_legend=1 00:04:58.594 --rc geninfo_all_blocks=1 00:04:58.594 --rc 
geninfo_unexecuted_blocks=1 00:04:58.594 00:04:58.594 ' 00:04:58.594 10:06:52 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:58.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.594 --rc genhtml_branch_coverage=1 00:04:58.594 --rc genhtml_function_coverage=1 00:04:58.594 --rc genhtml_legend=1 00:04:58.594 --rc geninfo_all_blocks=1 00:04:58.594 --rc geninfo_unexecuted_blocks=1 00:04:58.594 00:04:58.594 ' 00:04:58.594 10:06:52 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:58.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.594 --rc genhtml_branch_coverage=1 00:04:58.594 --rc genhtml_function_coverage=1 00:04:58.594 --rc genhtml_legend=1 00:04:58.594 --rc geninfo_all_blocks=1 00:04:58.594 --rc geninfo_unexecuted_blocks=1 00:04:58.594 00:04:58.594 ' 00:04:58.594 10:06:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:58.594 10:06:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:58.594 10:06:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:58.594 10:06:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:58.594 10:06:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:58.594 10:06:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:58.594 10:06:52 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:58.594 10:06:52 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.594 10:06:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.594 10:06:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3707369 00:04:58.594 10:06:52 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3707369 00:04:58.594 10:06:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:58.594 10:06:52 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3707369 ']' 00:04:58.594 10:06:52 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.594 10:06:52 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.594 10:06:52 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.594 10:06:52 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.594 10:06:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.594 [2024-12-13 10:06:52.405261] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
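spdkcli_tcp exercises the RPC server over TCP instead of the UNIX socket: spdk_tgt runs on two cores (-m 0x3), socat forwards 127.0.0.1:9998 to /var/tmp/spdk.sock, and rpc.py is pointed at that address, which is what produces the long rpc_get_methods listing below. A sketch of the bridge using only the commands visible in the trace (-r and -t as the retry count and timeout the script passes):

# expose the local RPC UNIX socket as TCP 127.0.0.1:9998
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# query the target through the TCP side
./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid" 2>/dev/null || true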
00:04:58.594 [2024-12-13 10:06:52.405355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3707369 ] 00:04:58.853 [2024-12-13 10:06:52.521988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.853 [2024-12-13 10:06:52.630490] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.853 [2024-12-13 10:06:52.630499] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.790 10:06:53 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.790 10:06:53 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:59.790 10:06:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3707558 00:04:59.790 10:06:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:59.790 10:06:53 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:59.790 [ 00:04:59.790 "bdev_malloc_delete", 00:04:59.790 "bdev_malloc_create", 00:04:59.790 "bdev_null_resize", 00:04:59.790 "bdev_null_delete", 00:04:59.790 "bdev_null_create", 00:04:59.790 "bdev_nvme_cuse_unregister", 00:04:59.790 "bdev_nvme_cuse_register", 00:04:59.790 "bdev_opal_new_user", 00:04:59.790 "bdev_opal_set_lock_state", 00:04:59.790 "bdev_opal_delete", 00:04:59.790 "bdev_opal_get_info", 00:04:59.790 "bdev_opal_create", 00:04:59.790 "bdev_nvme_opal_revert", 00:04:59.790 "bdev_nvme_opal_init", 00:04:59.790 "bdev_nvme_send_cmd", 00:04:59.790 "bdev_nvme_set_keys", 00:04:59.790 "bdev_nvme_get_path_iostat", 00:04:59.790 "bdev_nvme_get_mdns_discovery_info", 00:04:59.790 "bdev_nvme_stop_mdns_discovery", 00:04:59.790 "bdev_nvme_start_mdns_discovery", 00:04:59.790 "bdev_nvme_set_multipath_policy", 00:04:59.790 "bdev_nvme_set_preferred_path", 00:04:59.790 "bdev_nvme_get_io_paths", 00:04:59.790 "bdev_nvme_remove_error_injection", 00:04:59.790 "bdev_nvme_add_error_injection", 00:04:59.790 "bdev_nvme_get_discovery_info", 00:04:59.790 "bdev_nvme_stop_discovery", 00:04:59.790 "bdev_nvme_start_discovery", 00:04:59.790 "bdev_nvme_get_controller_health_info", 00:04:59.790 "bdev_nvme_disable_controller", 00:04:59.790 "bdev_nvme_enable_controller", 00:04:59.790 "bdev_nvme_reset_controller", 00:04:59.790 "bdev_nvme_get_transport_statistics", 00:04:59.790 "bdev_nvme_apply_firmware", 00:04:59.790 "bdev_nvme_detach_controller", 00:04:59.790 "bdev_nvme_get_controllers", 00:04:59.790 "bdev_nvme_attach_controller", 00:04:59.790 "bdev_nvme_set_hotplug", 00:04:59.790 "bdev_nvme_set_options", 00:04:59.790 "bdev_passthru_delete", 00:04:59.790 "bdev_passthru_create", 00:04:59.790 "bdev_lvol_set_parent_bdev", 00:04:59.790 "bdev_lvol_set_parent", 00:04:59.790 "bdev_lvol_check_shallow_copy", 00:04:59.790 "bdev_lvol_start_shallow_copy", 00:04:59.790 "bdev_lvol_grow_lvstore", 00:04:59.790 "bdev_lvol_get_lvols", 00:04:59.790 "bdev_lvol_get_lvstores", 00:04:59.790 "bdev_lvol_delete", 00:04:59.790 "bdev_lvol_set_read_only", 00:04:59.790 "bdev_lvol_resize", 00:04:59.790 "bdev_lvol_decouple_parent", 00:04:59.790 "bdev_lvol_inflate", 00:04:59.790 "bdev_lvol_rename", 00:04:59.790 "bdev_lvol_clone_bdev", 00:04:59.790 "bdev_lvol_clone", 00:04:59.790 "bdev_lvol_snapshot", 00:04:59.790 "bdev_lvol_create", 00:04:59.790 "bdev_lvol_delete_lvstore", 00:04:59.790 "bdev_lvol_rename_lvstore", 
00:04:59.790 "bdev_lvol_create_lvstore", 00:04:59.790 "bdev_raid_set_options", 00:04:59.790 "bdev_raid_remove_base_bdev", 00:04:59.790 "bdev_raid_add_base_bdev", 00:04:59.790 "bdev_raid_delete", 00:04:59.790 "bdev_raid_create", 00:04:59.790 "bdev_raid_get_bdevs", 00:04:59.790 "bdev_error_inject_error", 00:04:59.790 "bdev_error_delete", 00:04:59.790 "bdev_error_create", 00:04:59.790 "bdev_split_delete", 00:04:59.790 "bdev_split_create", 00:04:59.790 "bdev_delay_delete", 00:04:59.790 "bdev_delay_create", 00:04:59.790 "bdev_delay_update_latency", 00:04:59.790 "bdev_zone_block_delete", 00:04:59.790 "bdev_zone_block_create", 00:04:59.790 "blobfs_create", 00:04:59.790 "blobfs_detect", 00:04:59.790 "blobfs_set_cache_size", 00:04:59.790 "bdev_aio_delete", 00:04:59.790 "bdev_aio_rescan", 00:04:59.790 "bdev_aio_create", 00:04:59.790 "bdev_ftl_set_property", 00:04:59.790 "bdev_ftl_get_properties", 00:04:59.790 "bdev_ftl_get_stats", 00:04:59.790 "bdev_ftl_unmap", 00:04:59.790 "bdev_ftl_unload", 00:04:59.790 "bdev_ftl_delete", 00:04:59.790 "bdev_ftl_load", 00:04:59.790 "bdev_ftl_create", 00:04:59.790 "bdev_virtio_attach_controller", 00:04:59.790 "bdev_virtio_scsi_get_devices", 00:04:59.790 "bdev_virtio_detach_controller", 00:04:59.790 "bdev_virtio_blk_set_hotplug", 00:04:59.790 "bdev_iscsi_delete", 00:04:59.790 "bdev_iscsi_create", 00:04:59.790 "bdev_iscsi_set_options", 00:04:59.790 "accel_error_inject_error", 00:04:59.790 "ioat_scan_accel_module", 00:04:59.790 "dsa_scan_accel_module", 00:04:59.790 "iaa_scan_accel_module", 00:04:59.790 "keyring_file_remove_key", 00:04:59.790 "keyring_file_add_key", 00:04:59.790 "keyring_linux_set_options", 00:04:59.790 "fsdev_aio_delete", 00:04:59.790 "fsdev_aio_create", 00:04:59.790 "iscsi_get_histogram", 00:04:59.790 "iscsi_enable_histogram", 00:04:59.790 "iscsi_set_options", 00:04:59.790 "iscsi_get_auth_groups", 00:04:59.790 "iscsi_auth_group_remove_secret", 00:04:59.790 "iscsi_auth_group_add_secret", 00:04:59.790 "iscsi_delete_auth_group", 00:04:59.790 "iscsi_create_auth_group", 00:04:59.790 "iscsi_set_discovery_auth", 00:04:59.790 "iscsi_get_options", 00:04:59.790 "iscsi_target_node_request_logout", 00:04:59.790 "iscsi_target_node_set_redirect", 00:04:59.790 "iscsi_target_node_set_auth", 00:04:59.790 "iscsi_target_node_add_lun", 00:04:59.790 "iscsi_get_stats", 00:04:59.790 "iscsi_get_connections", 00:04:59.790 "iscsi_portal_group_set_auth", 00:04:59.790 "iscsi_start_portal_group", 00:04:59.790 "iscsi_delete_portal_group", 00:04:59.790 "iscsi_create_portal_group", 00:04:59.790 "iscsi_get_portal_groups", 00:04:59.790 "iscsi_delete_target_node", 00:04:59.790 "iscsi_target_node_remove_pg_ig_maps", 00:04:59.790 "iscsi_target_node_add_pg_ig_maps", 00:04:59.790 "iscsi_create_target_node", 00:04:59.790 "iscsi_get_target_nodes", 00:04:59.790 "iscsi_delete_initiator_group", 00:04:59.790 "iscsi_initiator_group_remove_initiators", 00:04:59.790 "iscsi_initiator_group_add_initiators", 00:04:59.790 "iscsi_create_initiator_group", 00:04:59.790 "iscsi_get_initiator_groups", 00:04:59.790 "nvmf_set_crdt", 00:04:59.790 "nvmf_set_config", 00:04:59.790 "nvmf_set_max_subsystems", 00:04:59.790 "nvmf_stop_mdns_prr", 00:04:59.790 "nvmf_publish_mdns_prr", 00:04:59.790 "nvmf_subsystem_get_listeners", 00:04:59.790 "nvmf_subsystem_get_qpairs", 00:04:59.790 "nvmf_subsystem_get_controllers", 00:04:59.790 "nvmf_get_stats", 00:04:59.790 "nvmf_get_transports", 00:04:59.790 "nvmf_create_transport", 00:04:59.790 "nvmf_get_targets", 00:04:59.790 "nvmf_delete_target", 00:04:59.790 "nvmf_create_target", 
00:04:59.790 "nvmf_subsystem_allow_any_host", 00:04:59.790 "nvmf_subsystem_set_keys", 00:04:59.790 "nvmf_subsystem_remove_host", 00:04:59.790 "nvmf_subsystem_add_host", 00:04:59.790 "nvmf_ns_remove_host", 00:04:59.790 "nvmf_ns_add_host", 00:04:59.790 "nvmf_subsystem_remove_ns", 00:04:59.790 "nvmf_subsystem_set_ns_ana_group", 00:04:59.790 "nvmf_subsystem_add_ns", 00:04:59.790 "nvmf_subsystem_listener_set_ana_state", 00:04:59.790 "nvmf_discovery_get_referrals", 00:04:59.790 "nvmf_discovery_remove_referral", 00:04:59.790 "nvmf_discovery_add_referral", 00:04:59.790 "nvmf_subsystem_remove_listener", 00:04:59.790 "nvmf_subsystem_add_listener", 00:04:59.790 "nvmf_delete_subsystem", 00:04:59.790 "nvmf_create_subsystem", 00:04:59.790 "nvmf_get_subsystems", 00:04:59.790 "env_dpdk_get_mem_stats", 00:04:59.790 "nbd_get_disks", 00:04:59.790 "nbd_stop_disk", 00:04:59.790 "nbd_start_disk", 00:04:59.790 "ublk_recover_disk", 00:04:59.790 "ublk_get_disks", 00:04:59.790 "ublk_stop_disk", 00:04:59.790 "ublk_start_disk", 00:04:59.790 "ublk_destroy_target", 00:04:59.790 "ublk_create_target", 00:04:59.790 "virtio_blk_create_transport", 00:04:59.790 "virtio_blk_get_transports", 00:04:59.790 "vhost_controller_set_coalescing", 00:04:59.790 "vhost_get_controllers", 00:04:59.791 "vhost_delete_controller", 00:04:59.791 "vhost_create_blk_controller", 00:04:59.791 "vhost_scsi_controller_remove_target", 00:04:59.791 "vhost_scsi_controller_add_target", 00:04:59.791 "vhost_start_scsi_controller", 00:04:59.791 "vhost_create_scsi_controller", 00:04:59.791 "thread_set_cpumask", 00:04:59.791 "scheduler_set_options", 00:04:59.791 "framework_get_governor", 00:04:59.791 "framework_get_scheduler", 00:04:59.791 "framework_set_scheduler", 00:04:59.791 "framework_get_reactors", 00:04:59.791 "thread_get_io_channels", 00:04:59.791 "thread_get_pollers", 00:04:59.791 "thread_get_stats", 00:04:59.791 "framework_monitor_context_switch", 00:04:59.791 "spdk_kill_instance", 00:04:59.791 "log_enable_timestamps", 00:04:59.791 "log_get_flags", 00:04:59.791 "log_clear_flag", 00:04:59.791 "log_set_flag", 00:04:59.791 "log_get_level", 00:04:59.791 "log_set_level", 00:04:59.791 "log_get_print_level", 00:04:59.791 "log_set_print_level", 00:04:59.791 "framework_enable_cpumask_locks", 00:04:59.791 "framework_disable_cpumask_locks", 00:04:59.791 "framework_wait_init", 00:04:59.791 "framework_start_init", 00:04:59.791 "scsi_get_devices", 00:04:59.791 "bdev_get_histogram", 00:04:59.791 "bdev_enable_histogram", 00:04:59.791 "bdev_set_qos_limit", 00:04:59.791 "bdev_set_qd_sampling_period", 00:04:59.791 "bdev_get_bdevs", 00:04:59.791 "bdev_reset_iostat", 00:04:59.791 "bdev_get_iostat", 00:04:59.791 "bdev_examine", 00:04:59.791 "bdev_wait_for_examine", 00:04:59.791 "bdev_set_options", 00:04:59.791 "accel_get_stats", 00:04:59.791 "accel_set_options", 00:04:59.791 "accel_set_driver", 00:04:59.791 "accel_crypto_key_destroy", 00:04:59.791 "accel_crypto_keys_get", 00:04:59.791 "accel_crypto_key_create", 00:04:59.791 "accel_assign_opc", 00:04:59.791 "accel_get_module_info", 00:04:59.791 "accel_get_opc_assignments", 00:04:59.791 "vmd_rescan", 00:04:59.791 "vmd_remove_device", 00:04:59.791 "vmd_enable", 00:04:59.791 "sock_get_default_impl", 00:04:59.791 "sock_set_default_impl", 00:04:59.791 "sock_impl_set_options", 00:04:59.791 "sock_impl_get_options", 00:04:59.791 "iobuf_get_stats", 00:04:59.791 "iobuf_set_options", 00:04:59.791 "keyring_get_keys", 00:04:59.791 "framework_get_pci_devices", 00:04:59.791 "framework_get_config", 00:04:59.791 "framework_get_subsystems", 
00:04:59.791 "fsdev_set_opts", 00:04:59.791 "fsdev_get_opts", 00:04:59.791 "trace_get_info", 00:04:59.791 "trace_get_tpoint_group_mask", 00:04:59.791 "trace_disable_tpoint_group", 00:04:59.791 "trace_enable_tpoint_group", 00:04:59.791 "trace_clear_tpoint_mask", 00:04:59.791 "trace_set_tpoint_mask", 00:04:59.791 "notify_get_notifications", 00:04:59.791 "notify_get_types", 00:04:59.791 "spdk_get_version", 00:04:59.791 "rpc_get_methods" 00:04:59.791 ] 00:04:59.791 10:06:53 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:59.791 10:06:53 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:59.791 10:06:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.050 10:06:53 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:00.050 10:06:53 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3707369 00:05:00.050 10:06:53 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3707369 ']' 00:05:00.050 10:06:53 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3707369 00:05:00.050 10:06:53 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:00.050 10:06:53 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.050 10:06:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3707369 00:05:00.050 10:06:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.050 10:06:53 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.050 10:06:53 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3707369' 00:05:00.050 killing process with pid 3707369 00:05:00.050 10:06:53 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3707369 00:05:00.050 10:06:53 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3707369 00:05:02.590 00:05:02.590 real 0m3.997s 00:05:02.590 user 0m7.298s 00:05:02.590 sys 0m0.588s 00:05:02.590 10:06:56 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.590 10:06:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.590 ************************************ 00:05:02.590 END TEST spdkcli_tcp 00:05:02.590 ************************************ 00:05:02.590 10:06:56 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:02.590 10:06:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.590 10:06:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.590 10:06:56 -- common/autotest_common.sh@10 -- # set +x 00:05:02.590 ************************************ 00:05:02.590 START TEST dpdk_mem_utility 00:05:02.590 ************************************ 00:05:02.590 10:06:56 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:02.590 * Looking for test storage... 
00:05:02.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:02.590 10:06:56 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:02.590 10:06:56 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:02.590 10:06:56 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:02.590 10:06:56 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.590 10:06:56 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:02.590 10:06:56 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.590 10:06:56 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:02.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.590 --rc genhtml_branch_coverage=1 00:05:02.590 --rc genhtml_function_coverage=1 00:05:02.590 --rc genhtml_legend=1 00:05:02.590 --rc geninfo_all_blocks=1 00:05:02.590 --rc geninfo_unexecuted_blocks=1 00:05:02.590 00:05:02.590 ' 00:05:02.590 10:06:56 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:02.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.590 --rc 
genhtml_branch_coverage=1 00:05:02.590 --rc genhtml_function_coverage=1 00:05:02.590 --rc genhtml_legend=1 00:05:02.590 --rc geninfo_all_blocks=1 00:05:02.590 --rc geninfo_unexecuted_blocks=1 00:05:02.590 00:05:02.590 ' 00:05:02.590 10:06:56 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:02.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.590 --rc genhtml_branch_coverage=1 00:05:02.590 --rc genhtml_function_coverage=1 00:05:02.590 --rc genhtml_legend=1 00:05:02.590 --rc geninfo_all_blocks=1 00:05:02.590 --rc geninfo_unexecuted_blocks=1 00:05:02.590 00:05:02.590 ' 00:05:02.590 10:06:56 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:02.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.590 --rc genhtml_branch_coverage=1 00:05:02.590 --rc genhtml_function_coverage=1 00:05:02.590 --rc genhtml_legend=1 00:05:02.590 --rc geninfo_all_blocks=1 00:05:02.590 --rc geninfo_unexecuted_blocks=1 00:05:02.590 00:05:02.590 ' 00:05:02.590 10:06:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:02.590 10:06:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3708172 00:05:02.590 10:06:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3708172 00:05:02.590 10:06:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.590 10:06:56 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3708172 ']' 00:05:02.590 10:06:56 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.590 10:06:56 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.590 10:06:56 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.590 10:06:56 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.590 10:06:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:02.590 [2024-12-13 10:06:56.473409] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
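Note on the dpdk_mem_utility run starting above: test_dpdk_mem_info.sh launches spdk_tgt, asks it to dump DPDK memory statistics, then post-processes the dump with scripts/dpdk_mem_info.py. A minimal by-hand sketch of that sequence, assuming the default /var/tmp/spdk.sock RPC socket and the build-tree layout used in this workspace, would be:

  ./build/bin/spdk_tgt &                      # start the target and wait for the RPC socket
  ./scripts/rpc.py env_dpdk_get_mem_stats     # writes the dump; returns { "filename": "/tmp/spdk_mem_dump.txt" }
  ./scripts/dpdk_mem_info.py                  # summarize heaps, mempools and memzones from the dump
  ./scripts/dpdk_mem_info.py -m 0             # per-element detail for heap id 0, as printed in the trace below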
00:05:02.591 [2024-12-13 10:06:56.473529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3708172 ] 00:05:02.850 [2024-12-13 10:06:56.586860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.850 [2024-12-13 10:06:56.692238] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.787 10:06:57 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.787 10:06:57 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:03.787 10:06:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:03.787 10:06:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:03.787 10:06:57 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.787 10:06:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.787 { 00:05:03.787 "filename": "/tmp/spdk_mem_dump.txt" 00:05:03.787 } 00:05:03.787 10:06:57 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.787 10:06:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:03.787 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:03.787 1 heaps totaling size 824.000000 MiB 00:05:03.787 size: 824.000000 MiB heap id: 0 00:05:03.787 end heaps---------- 00:05:03.787 9 mempools totaling size 603.782043 MiB 00:05:03.787 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:03.787 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:03.787 size: 100.555481 MiB name: bdev_io_3708172 00:05:03.787 size: 50.003479 MiB name: msgpool_3708172 00:05:03.787 size: 36.509338 MiB name: fsdev_io_3708172 00:05:03.787 size: 21.763794 MiB name: PDU_Pool 00:05:03.787 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:03.787 size: 4.133484 MiB name: evtpool_3708172 00:05:03.787 size: 0.026123 MiB name: Session_Pool 00:05:03.787 end mempools------- 00:05:03.787 6 memzones totaling size 4.142822 MiB 00:05:03.787 size: 1.000366 MiB name: RG_ring_0_3708172 00:05:03.787 size: 1.000366 MiB name: RG_ring_1_3708172 00:05:03.787 size: 1.000366 MiB name: RG_ring_4_3708172 00:05:03.787 size: 1.000366 MiB name: RG_ring_5_3708172 00:05:03.787 size: 0.125366 MiB name: RG_ring_2_3708172 00:05:03.787 size: 0.015991 MiB name: RG_ring_3_3708172 00:05:03.787 end memzones------- 00:05:03.787 10:06:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:03.787 heap id: 0 total size: 824.000000 MiB number of busy elements: 44 number of free elements: 19 00:05:03.787 list of free elements. 
size: 16.847595 MiB 00:05:03.787 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:03.787 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:03.787 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:03.787 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:03.787 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:03.787 element at address: 0x200019a00000 with size: 0.999329 MiB 00:05:03.787 element at address: 0x200000400000 with size: 0.998108 MiB 00:05:03.787 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:03.787 element at address: 0x200019200000 with size: 0.959900 MiB 00:05:03.787 element at address: 0x200019d00040 with size: 0.937256 MiB 00:05:03.787 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:03.787 element at address: 0x20001b400000 with size: 0.583191 MiB 00:05:03.787 element at address: 0x200000c00000 with size: 0.495300 MiB 00:05:03.787 element at address: 0x200019600000 with size: 0.491150 MiB 00:05:03.787 element at address: 0x200019e00000 with size: 0.485657 MiB 00:05:03.787 element at address: 0x200012c00000 with size: 0.436157 MiB 00:05:03.787 element at address: 0x200028800000 with size: 0.411072 MiB 00:05:03.787 element at address: 0x200000800000 with size: 0.355286 MiB 00:05:03.787 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:05:03.787 list of standard malloc elements. size: 199.221497 MiB 00:05:03.787 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:03.787 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:03.787 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:03.787 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:03.787 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:03.787 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:03.787 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:03.787 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:03.787 element at address: 0x200012bff040 with size: 0.000427 MiB 00:05:03.787 element at address: 0x200012bffa00 with size: 0.000366 MiB 00:05:03.787 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:03.787 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:03.787 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:03.787 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:03.787 element at address: 0x2000004ffa40 with size: 0.000244 MiB 00:05:03.787 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:03.787 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:03.787 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:03.787 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:03.787 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:03.787 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:03.787 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:03.787 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:03.787 element at address: 0x20000a5ff480 with size: 0.000244 MiB 00:05:03.787 element at address: 0x20000a5ff580 with size: 0.000244 MiB 00:05:03.787 element at address: 0x20000a5ff680 with size: 0.000244 MiB 00:05:03.787 element at address: 0x20000a5ff780 with size: 0.000244 MiB 00:05:03.787 element at address: 0x20000a5ff880 with size: 0.000244 MiB 00:05:03.787 element at address: 0x20000a5ff980 with size: 0.000244 MiB 
00:05:03.787 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:03.787 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:03.787 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:03.787 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:03.787 element at address: 0x200012bff200 with size: 0.000244 MiB 00:05:03.787 element at address: 0x200012bff300 with size: 0.000244 MiB 00:05:03.787 element at address: 0x200012bff400 with size: 0.000244 MiB 00:05:03.787 element at address: 0x200012bff500 with size: 0.000244 MiB 00:05:03.787 element at address: 0x200012bff600 with size: 0.000244 MiB 00:05:03.787 element at address: 0x200012bff700 with size: 0.000244 MiB 00:05:03.787 element at address: 0x200012bff800 with size: 0.000244 MiB 00:05:03.788 element at address: 0x200012bff900 with size: 0.000244 MiB 00:05:03.788 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:03.788 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:03.788 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:03.788 list of memzone associated elements. size: 607.930908 MiB 00:05:03.788 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:03.788 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:03.788 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:03.788 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:03.788 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:03.788 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3708172_0 00:05:03.788 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:03.788 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3708172_0 00:05:03.788 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:03.788 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3708172_0 00:05:03.788 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:03.788 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:03.788 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:03.788 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:03.788 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:03.788 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3708172_0 00:05:03.788 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:03.788 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3708172 00:05:03.788 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:03.788 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3708172 00:05:03.788 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:03.788 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:03.788 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:03.788 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:03.788 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:03.788 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:03.788 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:03.788 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:03.788 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:03.788 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3708172 00:05:03.788 element at address: 0x2000008ffb80 
with size: 1.000549 MiB 00:05:03.788 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3708172 00:05:03.788 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:03.788 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3708172 00:05:03.788 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:03.788 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3708172 00:05:03.788 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:03.788 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3708172 00:05:03.788 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:03.788 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3708172 00:05:03.788 element at address: 0x20001967dbc0 with size: 0.500549 MiB 00:05:03.788 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:03.788 element at address: 0x200012c6fa80 with size: 0.500549 MiB 00:05:03.788 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:03.788 element at address: 0x200019e7c540 with size: 0.250549 MiB 00:05:03.788 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:03.788 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:03.788 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3708172 00:05:03.788 element at address: 0x20000085f180 with size: 0.125549 MiB 00:05:03.788 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3708172 00:05:03.788 element at address: 0x2000192f5bc0 with size: 0.031799 MiB 00:05:03.788 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:03.788 element at address: 0x2000288693c0 with size: 0.023804 MiB 00:05:03.788 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:03.788 element at address: 0x20000085af40 with size: 0.016174 MiB 00:05:03.788 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3708172 00:05:03.788 element at address: 0x20002886f540 with size: 0.002502 MiB 00:05:03.788 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:03.788 element at address: 0x2000004ffb40 with size: 0.000366 MiB 00:05:03.788 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3708172 00:05:03.788 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:03.788 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3708172 00:05:03.788 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:03.788 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3708172 00:05:03.788 element at address: 0x20000a5ffa80 with size: 0.000366 MiB 00:05:03.788 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:03.788 10:06:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:03.788 10:06:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3708172 00:05:03.788 10:06:57 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3708172 ']' 00:05:03.788 10:06:57 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3708172 00:05:03.788 10:06:57 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:03.788 10:06:57 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.788 10:06:57 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3708172 00:05:03.788 10:06:57 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:05:03.788 10:06:57 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.788 10:06:57 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3708172' 00:05:03.788 killing process with pid 3708172 00:05:03.788 10:06:57 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3708172 00:05:03.788 10:06:57 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3708172 00:05:06.323 00:05:06.323 real 0m3.761s 00:05:06.323 user 0m3.724s 00:05:06.323 sys 0m0.565s 00:05:06.323 10:06:59 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.323 10:06:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:06.323 ************************************ 00:05:06.323 END TEST dpdk_mem_utility 00:05:06.323 ************************************ 00:05:06.323 10:07:00 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:06.323 10:07:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.323 10:07:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.323 10:07:00 -- common/autotest_common.sh@10 -- # set +x 00:05:06.323 ************************************ 00:05:06.323 START TEST event 00:05:06.323 ************************************ 00:05:06.323 10:07:00 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:06.323 * Looking for test storage... 00:05:06.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:06.323 10:07:00 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:06.323 10:07:00 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:06.323 10:07:00 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:06.323 10:07:00 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:06.323 10:07:00 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.323 10:07:00 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.323 10:07:00 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.323 10:07:00 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.323 10:07:00 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.323 10:07:00 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.323 10:07:00 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.323 10:07:00 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.323 10:07:00 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.323 10:07:00 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.323 10:07:00 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.323 10:07:00 event -- scripts/common.sh@344 -- # case "$op" in 00:05:06.323 10:07:00 event -- scripts/common.sh@345 -- # : 1 00:05:06.323 10:07:00 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.323 10:07:00 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.323 10:07:00 event -- scripts/common.sh@365 -- # decimal 1 00:05:06.323 10:07:00 event -- scripts/common.sh@353 -- # local d=1 00:05:06.323 10:07:00 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.323 10:07:00 event -- scripts/common.sh@355 -- # echo 1 00:05:06.323 10:07:00 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.323 10:07:00 event -- scripts/common.sh@366 -- # decimal 2 00:05:06.323 10:07:00 event -- scripts/common.sh@353 -- # local d=2 00:05:06.323 10:07:00 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.323 10:07:00 event -- scripts/common.sh@355 -- # echo 2 00:05:06.323 10:07:00 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.323 10:07:00 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.324 10:07:00 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.324 10:07:00 event -- scripts/common.sh@368 -- # return 0 00:05:06.324 10:07:00 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.324 10:07:00 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:06.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.324 --rc genhtml_branch_coverage=1 00:05:06.324 --rc genhtml_function_coverage=1 00:05:06.324 --rc genhtml_legend=1 00:05:06.324 --rc geninfo_all_blocks=1 00:05:06.324 --rc geninfo_unexecuted_blocks=1 00:05:06.324 00:05:06.324 ' 00:05:06.324 10:07:00 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:06.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.324 --rc genhtml_branch_coverage=1 00:05:06.324 --rc genhtml_function_coverage=1 00:05:06.324 --rc genhtml_legend=1 00:05:06.324 --rc geninfo_all_blocks=1 00:05:06.324 --rc geninfo_unexecuted_blocks=1 00:05:06.324 00:05:06.324 ' 00:05:06.324 10:07:00 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:06.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.324 --rc genhtml_branch_coverage=1 00:05:06.324 --rc genhtml_function_coverage=1 00:05:06.324 --rc genhtml_legend=1 00:05:06.324 --rc geninfo_all_blocks=1 00:05:06.324 --rc geninfo_unexecuted_blocks=1 00:05:06.324 00:05:06.324 ' 00:05:06.324 10:07:00 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:06.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.324 --rc genhtml_branch_coverage=1 00:05:06.324 --rc genhtml_function_coverage=1 00:05:06.324 --rc genhtml_legend=1 00:05:06.324 --rc geninfo_all_blocks=1 00:05:06.324 --rc geninfo_unexecuted_blocks=1 00:05:06.324 00:05:06.324 ' 00:05:06.324 10:07:00 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:06.324 10:07:00 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:06.324 10:07:00 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:06.324 10:07:00 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:06.324 10:07:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.324 10:07:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.583 ************************************ 00:05:06.583 START TEST event_perf 00:05:06.583 ************************************ 00:05:06.583 10:07:00 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:06.583 Running I/O for 1 seconds...[2024-12-13 10:07:00.287862] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:06.583 [2024-12-13 10:07:00.287936] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3708819 ] 00:05:06.583 [2024-12-13 10:07:00.402161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.841 [2024-12-13 10:07:00.514696] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.841 [2024-12-13 10:07:00.514714] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.841 [2024-12-13 10:07:00.514809] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.841 [2024-12-13 10:07:00.514822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:08.217 Running I/O for 1 seconds... 00:05:08.217 lcore 0: 207256 00:05:08.217 lcore 1: 207255 00:05:08.217 lcore 2: 207255 00:05:08.217 lcore 3: 207256 00:05:08.217 done. 00:05:08.217 00:05:08.217 real 0m1.497s 00:05:08.217 user 0m4.361s 00:05:08.217 sys 0m0.132s 00:05:08.217 10:07:01 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.217 10:07:01 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:08.217 ************************************ 00:05:08.217 END TEST event_perf 00:05:08.217 ************************************ 00:05:08.217 10:07:01 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:08.217 10:07:01 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:08.217 10:07:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.217 10:07:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.217 ************************************ 00:05:08.217 START TEST event_reactor 00:05:08.217 ************************************ 00:05:08.217 10:07:01 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:08.217 [2024-12-13 10:07:01.853747] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:08.217 [2024-12-13 10:07:01.853819] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3709175 ] 00:05:08.217 [2024-12-13 10:07:01.964029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.217 [2024-12-13 10:07:02.069827] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.594 test_start 00:05:09.594 oneshot 00:05:09.594 tick 100 00:05:09.594 tick 100 00:05:09.594 tick 250 00:05:09.594 tick 100 00:05:09.594 tick 100 00:05:09.594 tick 100 00:05:09.594 tick 250 00:05:09.594 tick 500 00:05:09.594 tick 100 00:05:09.594 tick 100 00:05:09.594 tick 250 00:05:09.594 tick 100 00:05:09.594 tick 100 00:05:09.594 test_end 00:05:09.594 00:05:09.594 real 0m1.470s 00:05:09.594 user 0m1.339s 00:05:09.594 sys 0m0.125s 00:05:09.594 10:07:03 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.594 10:07:03 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:09.594 ************************************ 00:05:09.594 END TEST event_reactor 00:05:09.594 ************************************ 00:05:09.594 10:07:03 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:09.594 10:07:03 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:09.594 10:07:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.594 10:07:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.594 ************************************ 00:05:09.594 START TEST event_reactor_perf 00:05:09.594 ************************************ 00:05:09.594 10:07:03 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:09.594 [2024-12-13 10:07:03.389808] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:09.594 [2024-12-13 10:07:03.389881] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3709526 ] 00:05:09.852 [2024-12-13 10:07:03.498477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.852 [2024-12-13 10:07:03.596272] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.230 test_start 00:05:11.230 test_end 00:05:11.230 Performance: 396879 events per second 00:05:11.230 00:05:11.230 real 0m1.455s 00:05:11.230 user 0m1.331s 00:05:11.230 sys 0m0.118s 00:05:11.230 10:07:04 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.230 10:07:04 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:11.230 ************************************ 00:05:11.230 END TEST event_reactor_perf 00:05:11.230 ************************************ 00:05:11.230 10:07:04 event -- event/event.sh@49 -- # uname -s 00:05:11.230 10:07:04 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:11.230 10:07:04 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:11.230 10:07:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.230 10:07:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.230 10:07:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.230 ************************************ 00:05:11.230 START TEST event_scheduler 00:05:11.230 ************************************ 00:05:11.230 10:07:04 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:11.230 * Looking for test storage... 
00:05:11.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:11.230 10:07:04 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:11.230 10:07:04 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:11.230 10:07:04 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:11.230 10:07:05 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.230 10:07:05 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:11.230 10:07:05 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.230 10:07:05 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:11.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.230 --rc genhtml_branch_coverage=1 00:05:11.230 --rc genhtml_function_coverage=1 00:05:11.230 --rc genhtml_legend=1 00:05:11.230 --rc geninfo_all_blocks=1 00:05:11.230 --rc geninfo_unexecuted_blocks=1 00:05:11.230 00:05:11.230 ' 00:05:11.230 10:07:05 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:11.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.230 --rc genhtml_branch_coverage=1 00:05:11.230 --rc genhtml_function_coverage=1 00:05:11.230 --rc genhtml_legend=1 00:05:11.230 --rc geninfo_all_blocks=1 00:05:11.230 --rc geninfo_unexecuted_blocks=1 00:05:11.230 00:05:11.230 ' 00:05:11.230 10:07:05 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:11.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.230 --rc genhtml_branch_coverage=1 00:05:11.230 --rc genhtml_function_coverage=1 00:05:11.230 --rc genhtml_legend=1 00:05:11.230 --rc geninfo_all_blocks=1 00:05:11.230 --rc geninfo_unexecuted_blocks=1 00:05:11.230 00:05:11.230 ' 00:05:11.230 10:07:05 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:11.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.230 --rc genhtml_branch_coverage=1 00:05:11.230 --rc genhtml_function_coverage=1 00:05:11.230 --rc genhtml_legend=1 00:05:11.230 --rc geninfo_all_blocks=1 00:05:11.230 --rc geninfo_unexecuted_blocks=1 00:05:11.230 00:05:11.230 ' 00:05:11.230 10:07:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:11.230 10:07:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3709806 00:05:11.230 10:07:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.230 10:07:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:11.230 10:07:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
3709806 00:05:11.230 10:07:05 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3709806 ']' 00:05:11.230 10:07:05 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.230 10:07:05 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.230 10:07:05 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.230 10:07:05 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.230 10:07:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.230 [2024-12-13 10:07:05.086767] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:11.230 [2024-12-13 10:07:05.086868] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3709806 ] 00:05:11.488 [2024-12-13 10:07:05.193824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:11.488 [2024-12-13 10:07:05.303559] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.488 [2024-12-13 10:07:05.303629] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.488 [2024-12-13 10:07:05.303686] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.488 [2024-12-13 10:07:05.303700] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.055 10:07:05 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.055 10:07:05 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:12.055 10:07:05 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:12.055 10:07:05 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.055 10:07:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.055 [2024-12-13 10:07:05.930198] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:12.055 [2024-12-13 10:07:05.930224] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:12.055 [2024-12-13 10:07:05.930241] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:12.055 [2024-12-13 10:07:05.930250] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:12.055 [2024-12-13 10:07:05.930262] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:12.055 10:07:05 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.055 10:07:05 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:12.055 10:07:05 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.055 10:07:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.623 [2024-12-13 10:07:06.248011] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
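For reference, the scheduler setup exercised just above maps onto two plain RPC calls; a minimal sketch with scripts/rpc.py (default /var/tmp/spdk.sock socket assumed) is:

  ./scripts/rpc.py framework_set_scheduler dynamic   # dpdk governor init fails here, so only the dynamic scheduler limits get set
  ./scripts/rpc.py framework_start_init              # let the subsystems finish initialization
  ./scripts/rpc.py framework_get_scheduler           # optional: confirm which scheduler is active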
00:05:12.623 10:07:06 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.623 10:07:06 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:12.623 10:07:06 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.623 10:07:06 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.623 10:07:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.623 ************************************ 00:05:12.623 START TEST scheduler_create_thread 00:05:12.623 ************************************ 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.623 2 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.623 3 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.623 4 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.623 5 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.623 6 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.623 7 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.623 8 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.623 9 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.623 10 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:12.623 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.624 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.624 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.624 10:07:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:12.624 10:07:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:12.624 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.624 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.624 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.624 10:07:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:12.624 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.624 10:07:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.001 10:07:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.001 10:07:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:14.001 10:07:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:14.001 10:07:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.001 10:07:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.378 10:07:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.378 00:05:15.378 real 0m2.625s 00:05:15.378 user 0m0.023s 00:05:15.378 sys 0m0.006s 00:05:15.378 10:07:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.378 10:07:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.378 ************************************ 00:05:15.378 END TEST scheduler_create_thread 00:05:15.378 ************************************ 00:05:15.378 10:07:08 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:15.378 10:07:08 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3709806 00:05:15.378 10:07:08 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3709806 ']' 00:05:15.378 10:07:08 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 3709806 00:05:15.378 10:07:08 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:15.378 10:07:08 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.378 10:07:08 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3709806 00:05:15.378 10:07:08 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:15.378 10:07:08 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:15.378 10:07:08 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3709806' 00:05:15.378 killing process with pid 3709806 00:05:15.378 10:07:08 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3709806 00:05:15.378 10:07:08 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3709806 00:05:15.637 [2024-12-13 10:07:09.388494] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
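The scheduler_create_thread test that just finished drives a test-only RPC plugin rather than core RPCs; the trace above amounts to the pattern sketched here (the scheduler_plugin module must be importable by rpc.py, and thread ids such as 11 and 12 come back from the create calls rather than being fixed):

  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # pinned thread, 100% active
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0              # unpinned thread, starts idle
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # raise that thread to 50% active
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12                               # drop the short-lived 'deleted' thread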
00:05:17.013 00:05:17.013 real 0m5.661s 00:05:17.013 user 0m10.117s 00:05:17.013 sys 0m0.472s 00:05:17.013 10:07:10 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.013 10:07:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.013 ************************************ 00:05:17.013 END TEST event_scheduler 00:05:17.013 ************************************ 00:05:17.013 10:07:10 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:17.013 10:07:10 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:17.013 10:07:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.013 10:07:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.013 10:07:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.013 ************************************ 00:05:17.013 START TEST app_repeat 00:05:17.013 ************************************ 00:05:17.013 10:07:10 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:17.013 10:07:10 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.013 10:07:10 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.013 10:07:10 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:17.013 10:07:10 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.013 10:07:10 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:17.013 10:07:10 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:17.013 10:07:10 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:17.013 10:07:10 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3710759 00:05:17.014 10:07:10 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.014 10:07:10 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:17.014 10:07:10 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3710759' 00:05:17.014 Process app_repeat pid: 3710759 00:05:17.014 10:07:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:17.014 10:07:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:17.014 spdk_app_start Round 0 00:05:17.014 10:07:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3710759 /var/tmp/spdk-nbd.sock 00:05:17.014 10:07:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3710759 ']' 00:05:17.014 10:07:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.014 10:07:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.014 10:07:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:17.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.014 10:07:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.014 10:07:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.014 [2024-12-13 10:07:10.669419] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:17.014 [2024-12-13 10:07:10.669514] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3710759 ] 00:05:17.014 [2024-12-13 10:07:10.780332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.014 [2024-12-13 10:07:10.887231] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.014 [2024-12-13 10:07:10.887241] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.950 10:07:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.950 10:07:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:17.950 10:07:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.950 Malloc0 00:05:17.950 10:07:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.208 Malloc1 00:05:18.208 10:07:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.208 10:07:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.208 10:07:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.208 10:07:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:18.208 10:07:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.208 10:07:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:18.208 10:07:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.208 10:07:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.208 10:07:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.208 10:07:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:18.208 10:07:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.208 10:07:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:18.208 10:07:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:18.208 10:07:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:18.208 10:07:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.208 10:07:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:18.467 /dev/nbd0 00:05:18.467 10:07:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:18.467 10:07:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:18.467 10:07:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:18.467 10:07:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:18.467 10:07:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:18.467 10:07:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:18.467 10:07:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:18.467 10:07:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:18.467 10:07:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:18.467 10:07:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:18.467 10:07:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.467 1+0 records in 00:05:18.467 1+0 records out 00:05:18.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218472 s, 18.7 MB/s 00:05:18.467 10:07:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:18.467 10:07:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:18.467 10:07:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:18.467 10:07:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:18.467 10:07:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:18.467 10:07:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.467 10:07:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.467 10:07:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:18.726 /dev/nbd1 00:05:18.726 10:07:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:18.726 10:07:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:18.726 10:07:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:18.726 10:07:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:18.726 10:07:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:18.726 10:07:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:18.726 10:07:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:18.726 10:07:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:18.726 10:07:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:18.726 10:07:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:18.726 10:07:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.726 1+0 records in 00:05:18.726 1+0 records out 00:05:18.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217225 s, 18.9 MB/s 00:05:18.726 10:07:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:18.726 10:07:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:18.726 10:07:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:18.726 10:07:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:18.726 10:07:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:18.726 10:07:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.726 10:07:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.726 
10:07:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.726 10:07:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.726 10:07:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.985 { 00:05:18.985 "nbd_device": "/dev/nbd0", 00:05:18.985 "bdev_name": "Malloc0" 00:05:18.985 }, 00:05:18.985 { 00:05:18.985 "nbd_device": "/dev/nbd1", 00:05:18.985 "bdev_name": "Malloc1" 00:05:18.985 } 00:05:18.985 ]' 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.985 { 00:05:18.985 "nbd_device": "/dev/nbd0", 00:05:18.985 "bdev_name": "Malloc0" 00:05:18.985 }, 00:05:18.985 { 00:05:18.985 "nbd_device": "/dev/nbd1", 00:05:18.985 "bdev_name": "Malloc1" 00:05:18.985 } 00:05:18.985 ]' 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.985 /dev/nbd1' 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.985 /dev/nbd1' 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.985 256+0 records in 00:05:18.985 256+0 records out 00:05:18.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108237 s, 96.9 MB/s 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.985 256+0 records in 00:05:18.985 256+0 records out 00:05:18.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160576 s, 65.3 MB/s 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.985 256+0 records in 00:05:18.985 256+0 records out 00:05:18.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188738 s, 55.6 MB/s 00:05:18.985 10:07:12 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.985 10:07:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:19.244 10:07:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:19.244 10:07:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:19.244 10:07:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:19.244 10:07:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.244 10:07:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.244 10:07:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:19.244 10:07:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.244 10:07:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.244 10:07:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.244 10:07:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:19.503 10:07:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:19.503 10:07:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:19.503 10:07:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:19.503 10:07:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.503 10:07:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:19.503 10:07:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:19.503 10:07:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.503 10:07:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.503 10:07:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.503 10:07:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.503 10:07:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.762 10:07:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.762 10:07:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.762 10:07:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.762 10:07:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.762 10:07:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.762 10:07:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.762 10:07:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:19.762 10:07:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.762 10:07:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.762 10:07:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.762 10:07:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.762 10:07:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.762 10:07:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:20.021 10:07:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:21.405 [2024-12-13 10:07:15.040468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.405 [2024-12-13 10:07:15.144754] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.405 [2024-12-13 10:07:15.144754] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.666 [2024-12-13 10:07:15.337787] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:21.666 [2024-12-13 10:07:15.337834] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:23.042 10:07:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:23.042 10:07:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:23.042 spdk_app_start Round 1 00:05:23.042 10:07:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3710759 /var/tmp/spdk-nbd.sock 00:05:23.042 10:07:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3710759 ']' 00:05:23.042 10:07:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.042 10:07:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.042 10:07:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:23.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:23.042 10:07:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.042 10:07:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:23.301 10:07:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.301 10:07:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:23.301 10:07:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.560 Malloc0 00:05:23.560 10:07:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.818 Malloc1 00:05:23.818 10:07:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.818 10:07:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.818 10:07:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.818 10:07:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.818 10:07:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.818 10:07:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.819 10:07:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.819 10:07:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.819 10:07:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.819 10:07:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.819 10:07:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.819 10:07:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.819 10:07:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.819 10:07:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.819 10:07:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.819 10:07:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.819 /dev/nbd0 00:05:24.077 10:07:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:24.077 10:07:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:24.077 10:07:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:24.077 10:07:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:24.077 10:07:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:24.077 10:07:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:24.077 10:07:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:24.077 10:07:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:24.077 10:07:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:24.077 10:07:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:24.077 10:07:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:24.077 1+0 records in 00:05:24.077 1+0 records out 00:05:24.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019638 s, 20.9 MB/s 00:05:24.077 10:07:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:24.077 10:07:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:24.077 10:07:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:24.077 10:07:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:24.077 10:07:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:24.077 10:07:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.077 10:07:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.077 10:07:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:24.077 /dev/nbd1 00:05:24.077 10:07:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:24.336 10:07:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:24.336 10:07:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:24.336 10:07:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:24.336 10:07:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:24.336 10:07:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:24.336 10:07:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:24.336 10:07:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:24.336 10:07:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:24.336 10:07:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:24.336 10:07:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.336 1+0 records in 00:05:24.336 1+0 records out 00:05:24.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243122 s, 16.8 MB/s 00:05:24.336 10:07:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:24.336 10:07:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:24.336 10:07:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:24.336 10:07:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:24.336 10:07:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:24.336 10:07:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.336 10:07:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.336 10:07:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.336 10:07:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.336 10:07:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.336 10:07:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:24.336 { 00:05:24.336 "nbd_device": "/dev/nbd0", 00:05:24.336 "bdev_name": "Malloc0" 00:05:24.336 }, 00:05:24.336 { 00:05:24.336 "nbd_device": "/dev/nbd1", 00:05:24.336 "bdev_name": "Malloc1" 00:05:24.336 } 00:05:24.336 ]' 00:05:24.336 10:07:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.336 { 00:05:24.336 "nbd_device": "/dev/nbd0", 00:05:24.336 "bdev_name": "Malloc0" 00:05:24.336 }, 00:05:24.336 { 00:05:24.336 "nbd_device": "/dev/nbd1", 00:05:24.336 "bdev_name": "Malloc1" 00:05:24.336 } 00:05:24.336 ]' 00:05:24.336 10:07:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.595 10:07:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.595 /dev/nbd1' 00:05:24.595 10:07:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.595 10:07:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.595 /dev/nbd1' 00:05:24.595 10:07:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.595 10:07:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.595 10:07:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.595 10:07:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.595 10:07:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.595 10:07:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.595 10:07:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.595 10:07:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.595 10:07:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.595 10:07:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:24.595 10:07:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.595 256+0 records in 00:05:24.595 256+0 records out 00:05:24.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010678 s, 98.2 MB/s 00:05:24.595 10:07:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.595 10:07:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.595 256+0 records in 00:05:24.595 256+0 records out 00:05:24.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158108 s, 66.3 MB/s 00:05:24.595 10:07:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.595 10:07:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.595 256+0 records in 00:05:24.595 256+0 records out 00:05:24.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0185599 s, 56.5 MB/s 00:05:24.595 10:07:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.595 10:07:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.595 10:07:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.596 10:07:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:24.596 10:07:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.596 10:07:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.596 10:07:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.596 10:07:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.596 10:07:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.596 10:07:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.596 10:07:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.596 10:07:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:24.596 10:07:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.596 10:07:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.596 10:07:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.596 10:07:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.596 10:07:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:24.596 10:07:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.596 10:07:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.854 10:07:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.854 10:07:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.854 10:07:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.854 10:07:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.854 10:07:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.854 10:07:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.854 10:07:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.854 10:07:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.854 10:07:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.854 10:07:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:24.854 10:07:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.854 10:07:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.854 10:07:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.854 10:07:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.854 10:07:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.854 10:07:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.854 10:07:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.854 10:07:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.854 10:07:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.854 10:07:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.854 10:07:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.113 10:07:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:25.113 10:07:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:25.113 10:07:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.113 10:07:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:25.113 10:07:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.113 10:07:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:25.113 10:07:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:25.113 10:07:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:25.113 10:07:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:25.113 10:07:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:25.113 10:07:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:25.113 10:07:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:25.113 10:07:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:25.681 10:07:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:27.058 [2024-12-13 10:07:20.547781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.058 [2024-12-13 10:07:20.655280] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.058 [2024-12-13 10:07:20.655286] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.058 [2024-12-13 10:07:20.846969] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:27.058 [2024-12-13 10:07:20.847021] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:28.435 10:07:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:28.435 10:07:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:28.435 spdk_app_start Round 2 00:05:28.435 10:07:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3710759 /var/tmp/spdk-nbd.sock 00:05:28.693 10:07:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3710759 ']' 00:05:28.693 10:07:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.693 10:07:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.693 10:07:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:28.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:28.693 10:07:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.693 10:07:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.693 10:07:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.693 10:07:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:28.693 10:07:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.952 Malloc0 00:05:28.952 10:07:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.211 Malloc1 00:05:29.211 10:07:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.211 10:07:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.211 10:07:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.211 10:07:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:29.211 10:07:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.211 10:07:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:29.211 10:07:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.211 10:07:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.211 10:07:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.211 10:07:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:29.211 10:07:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.211 10:07:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:29.211 10:07:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:29.211 10:07:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:29.211 10:07:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.211 10:07:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:29.469 /dev/nbd0 00:05:29.469 10:07:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:29.469 10:07:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:29.469 10:07:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:29.469 10:07:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:29.469 10:07:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:29.469 10:07:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:29.469 10:07:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:29.469 10:07:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:29.469 10:07:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:29.469 10:07:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:29.469 10:07:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:29.469 1+0 records in 00:05:29.470 1+0 records out 00:05:29.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198245 s, 20.7 MB/s 00:05:29.470 10:07:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.470 10:07:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:29.470 10:07:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.470 10:07:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:29.470 10:07:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:29.470 10:07:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.470 10:07:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.470 10:07:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:29.771 /dev/nbd1 00:05:29.771 10:07:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:29.771 10:07:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:29.771 10:07:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:29.771 10:07:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:29.771 10:07:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:29.771 10:07:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:29.771 10:07:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:29.771 10:07:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:29.771 10:07:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:29.771 10:07:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:29.771 10:07:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.771 1+0 records in 00:05:29.771 1+0 records out 00:05:29.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220217 s, 18.6 MB/s 00:05:29.771 10:07:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.771 10:07:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:29.771 10:07:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.771 10:07:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:29.771 10:07:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:29.771 10:07:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.771 10:07:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.771 10:07:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.771 10:07:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.771 10:07:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:30.092 { 00:05:30.092 "nbd_device": "/dev/nbd0", 00:05:30.092 "bdev_name": "Malloc0" 00:05:30.092 }, 00:05:30.092 { 00:05:30.092 "nbd_device": "/dev/nbd1", 00:05:30.092 "bdev_name": "Malloc1" 00:05:30.092 } 00:05:30.092 ]' 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.092 { 00:05:30.092 "nbd_device": "/dev/nbd0", 00:05:30.092 "bdev_name": "Malloc0" 00:05:30.092 }, 00:05:30.092 { 00:05:30.092 "nbd_device": "/dev/nbd1", 00:05:30.092 "bdev_name": "Malloc1" 00:05:30.092 } 00:05:30.092 ]' 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.092 /dev/nbd1' 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.092 /dev/nbd1' 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.092 256+0 records in 00:05:30.092 256+0 records out 00:05:30.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108969 s, 96.2 MB/s 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.092 256+0 records in 00:05:30.092 256+0 records out 00:05:30.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156341 s, 67.1 MB/s 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:30.092 256+0 records in 00:05:30.092 256+0 records out 00:05:30.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191383 s, 54.8 MB/s 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.092 10:07:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:30.352 10:07:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:30.352 10:07:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:30.352 10:07:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:30.352 10:07:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.352 10:07:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.352 10:07:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:30.352 10:07:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.352 10:07:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.352 10:07:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.352 10:07:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:30.352 10:07:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:30.352 10:07:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:30.352 10:07:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:30.352 10:07:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.352 10:07:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.352 10:07:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:30.352 10:07:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.352 10:07:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.352 10:07:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.352 10:07:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.352 10:07:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.610 10:07:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:30.610 10:07:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:30.610 10:07:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.610 10:07:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:30.610 10:07:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:30.610 10:07:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.610 10:07:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:30.610 10:07:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:30.610 10:07:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:30.610 10:07:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:30.610 10:07:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:30.610 10:07:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:30.610 10:07:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:31.177 10:07:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:32.553 [2024-12-13 10:07:26.063563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.553 [2024-12-13 10:07:26.167103] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.553 [2024-12-13 10:07:26.167103] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.553 [2024-12-13 10:07:26.355759] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.553 [2024-12-13 10:07:26.355812] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:34.456 10:07:27 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3710759 /var/tmp/spdk-nbd.sock 00:05:34.456 10:07:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3710759 ']' 00:05:34.456 10:07:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.456 10:07:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.456 10:07:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:34.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:34.456 10:07:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.456 10:07:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.456 10:07:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.456 10:07:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:34.456 10:07:28 event.app_repeat -- event/event.sh@39 -- # killprocess 3710759 00:05:34.456 10:07:28 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3710759 ']' 00:05:34.456 10:07:28 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3710759 00:05:34.456 10:07:28 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:34.456 10:07:28 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.456 10:07:28 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3710759 00:05:34.456 10:07:28 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.456 10:07:28 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.456 10:07:28 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3710759' 00:05:34.456 killing process with pid 3710759 00:05:34.456 10:07:28 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3710759 00:05:34.456 10:07:28 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3710759 00:05:35.393 spdk_app_start is called in Round 0. 00:05:35.393 Shutdown signal received, stop current app iteration 00:05:35.393 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:35.393 spdk_app_start is called in Round 1. 00:05:35.393 Shutdown signal received, stop current app iteration 00:05:35.393 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:35.393 spdk_app_start is called in Round 2. 00:05:35.393 Shutdown signal received, stop current app iteration 00:05:35.393 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:35.393 spdk_app_start is called in Round 3. 
00:05:35.393 Shutdown signal received, stop current app iteration 00:05:35.393 10:07:29 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:35.393 10:07:29 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:35.393 00:05:35.393 real 0m18.518s 00:05:35.393 user 0m39.088s 00:05:35.393 sys 0m2.583s 00:05:35.393 10:07:29 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.393 10:07:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.393 ************************************ 00:05:35.393 END TEST app_repeat 00:05:35.393 ************************************ 00:05:35.393 10:07:29 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:35.393 10:07:29 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:35.393 10:07:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.393 10:07:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.393 10:07:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.393 ************************************ 00:05:35.393 START TEST cpu_locks 00:05:35.393 ************************************ 00:05:35.393 10:07:29 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:35.393 * Looking for test storage... 00:05:35.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:35.393 10:07:29 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:35.393 10:07:29 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:35.393 10:07:29 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:35.653 10:07:29 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.653 10:07:29 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:35.653 10:07:29 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.653 10:07:29 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:35.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.653 --rc genhtml_branch_coverage=1 00:05:35.653 --rc genhtml_function_coverage=1 00:05:35.653 --rc genhtml_legend=1 00:05:35.653 --rc geninfo_all_blocks=1 00:05:35.653 --rc geninfo_unexecuted_blocks=1 00:05:35.653 00:05:35.653 ' 00:05:35.653 10:07:29 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:35.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.653 --rc genhtml_branch_coverage=1 00:05:35.653 --rc genhtml_function_coverage=1 00:05:35.653 --rc genhtml_legend=1 00:05:35.653 --rc geninfo_all_blocks=1 00:05:35.653 --rc geninfo_unexecuted_blocks=1 00:05:35.653 00:05:35.653 ' 00:05:35.653 10:07:29 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:35.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.653 --rc genhtml_branch_coverage=1 00:05:35.653 --rc genhtml_function_coverage=1 00:05:35.653 --rc genhtml_legend=1 00:05:35.653 --rc geninfo_all_blocks=1 00:05:35.653 --rc geninfo_unexecuted_blocks=1 00:05:35.653 00:05:35.653 ' 00:05:35.653 10:07:29 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:35.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.653 --rc genhtml_branch_coverage=1 00:05:35.653 --rc genhtml_function_coverage=1 00:05:35.653 --rc genhtml_legend=1 00:05:35.653 --rc geninfo_all_blocks=1 00:05:35.653 --rc geninfo_unexecuted_blocks=1 00:05:35.653 00:05:35.653 ' 00:05:35.653 10:07:29 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:35.653 10:07:29 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:35.653 10:07:29 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:35.653 10:07:29 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:35.653 10:07:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.653 10:07:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.653 10:07:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.653 ************************************ 
00:05:35.653 START TEST default_locks 00:05:35.653 ************************************ 00:05:35.653 10:07:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:35.653 10:07:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3714147 00:05:35.653 10:07:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3714147 00:05:35.653 10:07:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.653 10:07:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3714147 ']' 00:05:35.653 10:07:29 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.653 10:07:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.653 10:07:29 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.653 10:07:29 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.653 10:07:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.653 [2024-12-13 10:07:29.488519] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:35.653 [2024-12-13 10:07:29.488606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3714147 ] 00:05:35.912 [2024-12-13 10:07:29.599786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.912 [2024-12-13 10:07:29.706485] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.849 10:07:30 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.849 10:07:30 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:36.849 10:07:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3714147 00:05:36.849 10:07:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3714147 00:05:36.849 10:07:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.416 lslocks: write error 00:05:37.416 10:07:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3714147 00:05:37.416 10:07:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3714147 ']' 00:05:37.416 10:07:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3714147 00:05:37.416 10:07:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:37.416 10:07:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.416 10:07:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3714147 00:05:37.416 10:07:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:37.416 10:07:31 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:37.416 10:07:31 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 3714147' 00:05:37.416 killing process with pid 3714147 00:05:37.416 10:07:31 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3714147 00:05:37.416 10:07:31 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3714147 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3714147 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3714147 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3714147 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3714147 ']' 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:39.951 10:07:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3714147) - No such process 00:05:39.951 ERROR: process (pid: 3714147) is no longer running 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:39.951 00:05:39.951 real 0m4.000s 00:05:39.951 user 0m3.950s 00:05:39.951 sys 0m0.734s 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.951 10:07:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.951 ************************************ 00:05:39.951 END TEST default_locks 00:05:39.951 ************************************ 00:05:39.951 10:07:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:39.951 10:07:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.951 10:07:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.951 10:07:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.951 ************************************ 00:05:39.951 START TEST default_locks_via_rpc 00:05:39.951 ************************************ 00:05:39.951 10:07:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:39.951 10:07:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3714849 00:05:39.951 10:07:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3714849 00:05:39.951 10:07:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.951 10:07:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3714849 ']' 00:05:39.951 10:07:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.951 10:07:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.951 10:07:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:39.951 10:07:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.951 10:07:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.951 [2024-12-13 10:07:33.560932] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:39.951 [2024-12-13 10:07:33.561022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3714849 ] 00:05:39.951 [2024-12-13 10:07:33.671894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.951 [2024-12-13 10:07:33.777345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.888 10:07:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.888 10:07:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:40.888 10:07:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:40.888 10:07:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.888 10:07:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.888 10:07:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.888 10:07:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:40.888 10:07:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:40.888 10:07:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:40.888 10:07:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:40.888 10:07:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:40.888 10:07:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.888 10:07:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.888 10:07:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.888 10:07:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3714849 00:05:40.888 10:07:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3714849 00:05:40.888 10:07:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.147 10:07:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3714849 00:05:41.147 10:07:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3714849 ']' 00:05:41.147 10:07:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3714849 00:05:41.147 10:07:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:41.147 10:07:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.147 10:07:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3714849 00:05:41.147 10:07:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.147 
10:07:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.147 10:07:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3714849' 00:05:41.147 killing process with pid 3714849 00:05:41.147 10:07:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3714849 00:05:41.147 10:07:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3714849 00:05:43.679 00:05:43.679 real 0m3.799s 00:05:43.679 user 0m3.780s 00:05:43.679 sys 0m0.610s 00:05:43.679 10:07:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.679 10:07:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.679 ************************************ 00:05:43.679 END TEST default_locks_via_rpc 00:05:43.679 ************************************ 00:05:43.679 10:07:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:43.679 10:07:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.679 10:07:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.679 10:07:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.679 ************************************ 00:05:43.679 START TEST non_locking_app_on_locked_coremask 00:05:43.679 ************************************ 00:05:43.679 10:07:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:43.679 10:07:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3715550 00:05:43.679 10:07:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3715550 /var/tmp/spdk.sock 00:05:43.679 10:07:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.679 10:07:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3715550 ']' 00:05:43.679 10:07:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.679 10:07:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.679 10:07:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.679 10:07:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.679 10:07:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.679 [2024-12-13 10:07:37.423291] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:43.679 [2024-12-13 10:07:37.423397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3715550 ] 00:05:43.679 [2024-12-13 10:07:37.536395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.938 [2024-12-13 10:07:37.642351] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.874 10:07:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.874 10:07:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:44.874 10:07:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:44.874 10:07:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3715774 00:05:44.874 10:07:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3715774 /var/tmp/spdk2.sock 00:05:44.874 10:07:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3715774 ']' 00:05:44.874 10:07:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.874 10:07:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.874 10:07:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.874 10:07:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.874 10:07:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.874 [2024-12-13 10:07:38.544407] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:44.874 [2024-12-13 10:07:38.544503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3715774 ] 00:05:44.874 [2024-12-13 10:07:38.700008] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
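The non_locking_app_on_locked_coremask sequence above starts one spdk_tgt that claims the core-0 lock and then a second instance on the same core with --disable-cpumask-locks and its own RPC socket, so both come up side by side. A minimal sketch of that pattern, assuming the spdk_tgt path and the waitforlisten helper visible in this run (sourced from autotest_common.sh):

  # Sketch only: mirrors the two-instance pattern seen in this log.
  # Assumes autotest_common.sh is sourced so waitforlisten is available.
  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

  "$spdk_tgt" -m 0x1 &                               # first instance claims the core-0 lock
  pid1=$!
  waitforlisten "$pid1"                              # waits on /var/tmp/spdk.sock

  "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!                                            # same core, but no lock is taken
  waitforlisten "$pid2" /var/tmp/spdk2.sock

  # Only the first process should hold an spdk_cpu_lock file.
  lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "core lock held by $pid1"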
00:05:44.874 [2024-12-13 10:07:38.700055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.133 [2024-12-13 10:07:38.909198] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.666 10:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.666 10:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:47.666 10:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3715550 00:05:47.666 10:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.666 10:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3715550 00:05:47.925 lslocks: write error 00:05:47.925 10:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3715550 00:05:47.925 10:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3715550 ']' 00:05:47.925 10:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3715550 00:05:47.925 10:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:47.925 10:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.925 10:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3715550 00:05:47.925 10:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.925 10:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.925 10:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3715550' 00:05:47.925 killing process with pid 3715550 00:05:47.925 10:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3715550 00:05:47.925 10:07:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3715550 00:05:53.258 10:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3715774 00:05:53.258 10:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3715774 ']' 00:05:53.258 10:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3715774 00:05:53.258 10:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:53.258 10:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.258 10:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3715774 00:05:53.258 10:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.258 10:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.258 10:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3715774' 00:05:53.258 
killing process with pid 3715774 00:05:53.258 10:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3715774 00:05:53.258 10:07:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3715774 00:05:55.163 00:05:55.163 real 0m11.259s 00:05:55.163 user 0m11.486s 00:05:55.163 sys 0m1.229s 00:05:55.163 10:07:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.163 10:07:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.163 ************************************ 00:05:55.163 END TEST non_locking_app_on_locked_coremask 00:05:55.163 ************************************ 00:05:55.163 10:07:48 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:55.163 10:07:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.163 10:07:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.163 10:07:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.163 ************************************ 00:05:55.163 START TEST locking_app_on_unlocked_coremask 00:05:55.163 ************************************ 00:05:55.163 10:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:55.163 10:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3717388 00:05:55.163 10:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3717388 /var/tmp/spdk.sock 00:05:55.163 10:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:55.163 10:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3717388 ']' 00:05:55.163 10:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.163 10:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.163 10:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.163 10:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.163 10:07:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.163 [2024-12-13 10:07:48.754223] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:55.163 [2024-12-13 10:07:48.754316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717388 ] 00:05:55.163 [2024-12-13 10:07:48.870797] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:55.163 [2024-12-13 10:07:48.870835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.163 [2024-12-13 10:07:48.973435] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.100 10:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.100 10:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:56.100 10:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3717614 00:05:56.100 10:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3717614 /var/tmp/spdk2.sock 00:05:56.100 10:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:56.100 10:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3717614 ']' 00:05:56.100 10:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.100 10:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.100 10:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.100 10:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.100 10:07:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.100 [2024-12-13 10:07:49.845240] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:56.100 [2024-12-13 10:07:49.845330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717614 ] 00:05:56.359 [2024-12-13 10:07:49.999994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.359 [2024-12-13 10:07:50.231940] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.892 10:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.892 10:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:58.892 10:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3717614 00:05:58.892 10:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3717614 00:05:58.892 10:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.151 lslocks: write error 00:05:59.151 10:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3717388 00:05:59.151 10:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3717388 ']' 00:05:59.151 10:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3717388 00:05:59.151 10:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:59.151 10:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.151 10:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3717388 00:05:59.151 10:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.151 10:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.151 10:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3717388' 00:05:59.151 killing process with pid 3717388 00:05:59.151 10:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3717388 00:05:59.151 10:07:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3717388 00:06:04.420 10:07:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3717614 00:06:04.420 10:07:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3717614 ']' 00:06:04.420 10:07:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3717614 00:06:04.420 10:07:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:04.420 10:07:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.420 10:07:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3717614 00:06:04.420 10:07:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.420 10:07:57 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.420 10:07:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3717614' 00:06:04.420 killing process with pid 3717614 00:06:04.420 10:07:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3717614 00:06:04.420 10:07:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3717614 00:06:06.324 00:06:06.324 real 0m11.143s 00:06:06.324 user 0m11.380s 00:06:06.324 sys 0m1.235s 00:06:06.324 10:07:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.324 10:07:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.324 ************************************ 00:06:06.324 END TEST locking_app_on_unlocked_coremask 00:06:06.324 ************************************ 00:06:06.324 10:07:59 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:06.324 10:07:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.324 10:07:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.324 10:07:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.324 ************************************ 00:06:06.324 START TEST locking_app_on_locked_coremask 00:06:06.324 ************************************ 00:06:06.324 10:07:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:06.324 10:07:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3719390 00:06:06.324 10:07:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3719390 /var/tmp/spdk.sock 00:06:06.324 10:07:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3719390 ']' 00:06:06.324 10:07:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.324 10:07:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.324 10:07:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.324 10:07:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.324 10:07:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.324 10:07:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.324 [2024-12-13 10:07:59.958133] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:06.324 [2024-12-13 10:07:59.958224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719390 ] 00:06:06.324 [2024-12-13 10:08:00.073344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.324 [2024-12-13 10:08:00.182773] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.261 10:08:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.261 10:08:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:07.261 10:08:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3719469 00:06:07.261 10:08:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:07.261 10:08:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3719469 /var/tmp/spdk2.sock 00:06:07.261 10:08:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:07.261 10:08:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3719469 /var/tmp/spdk2.sock 00:06:07.261 10:08:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:07.261 10:08:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.261 10:08:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:07.261 10:08:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.261 10:08:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3719469 /var/tmp/spdk2.sock 00:06:07.261 10:08:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3719469 ']' 00:06:07.261 10:08:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.261 10:08:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.261 10:08:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.261 10:08:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.261 10:08:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.261 [2024-12-13 10:08:01.061918] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:07.261 [2024-12-13 10:08:01.062008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719469 ] 00:06:07.520 [2024-12-13 10:08:01.218422] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3719390 has claimed it. 00:06:07.520 [2024-12-13 10:08:01.218478] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:08.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3719469) - No such process 00:06:08.087 ERROR: process (pid: 3719469) is no longer running 00:06:08.087 10:08:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.087 10:08:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:08.087 10:08:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:08.087 10:08:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:08.087 10:08:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:08.087 10:08:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:08.087 10:08:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3719390 00:06:08.087 10:08:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3719390 00:06:08.087 10:08:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.087 lslocks: write error 00:06:08.087 10:08:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3719390 00:06:08.087 10:08:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3719390 ']' 00:06:08.087 10:08:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3719390 00:06:08.087 10:08:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:08.087 10:08:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.087 10:08:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3719390 00:06:08.087 10:08:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.087 10:08:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.087 10:08:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3719390' 00:06:08.087 killing process with pid 3719390 00:06:08.087 10:08:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3719390 00:06:08.087 10:08:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3719390 00:06:10.620 00:06:10.620 real 0m4.285s 00:06:10.620 user 0m4.422s 00:06:10.620 sys 0m0.679s 00:06:10.620 10:08:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
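In the locking_app_on_locked_coremask run above, the second spdk_tgt is launched on the already-claimed core without --disable-cpumask-locks, logs "Cannot create lock on core 0, probably process 3719390 has claimed it" and exits, so the test asserts the failure with the NOT helper. A rough sketch of that expected-failure check, assuming the NOT and waitforlisten helpers behave as they do in this log:

  # Sketch only: the second instance must be rejected while the first holds the lock.
  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

  "$spdk_tgt" -m 0x1 &                               # claims /var/tmp/spdk_cpu_lock_000
  pid1=$!
  waitforlisten "$pid1"

  "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &        # same core, core locks still enabled
  pid2=$!
  # NOT succeeds only when the inner command fails, i.e. the second target never listens.
  NOT waitforlisten "$pid2" /var/tmp/spdk2.sock && echo "second instance rejected as expected"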
00:06:10.620 10:08:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.620 ************************************ 00:06:10.620 END TEST locking_app_on_locked_coremask 00:06:10.620 ************************************ 00:06:10.620 10:08:04 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:10.620 10:08:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.620 10:08:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.620 10:08:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.620 ************************************ 00:06:10.620 START TEST locking_overlapped_coremask 00:06:10.620 ************************************ 00:06:10.620 10:08:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:10.620 10:08:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3720138 00:06:10.620 10:08:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3720138 /var/tmp/spdk.sock 00:06:10.620 10:08:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3720138 ']' 00:06:10.620 10:08:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.620 10:08:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.621 10:08:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.621 10:08:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:10.621 10:08:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.621 10:08:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.621 [2024-12-13 10:08:04.302248] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:10.621 [2024-12-13 10:08:04.302336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720138 ] 00:06:10.621 [2024-12-13 10:08:04.415993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.879 [2024-12-13 10:08:04.523861] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.879 [2024-12-13 10:08:04.523932] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.879 [2024-12-13 10:08:04.523938] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.815 10:08:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.815 10:08:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:11.815 10:08:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3720366 00:06:11.815 10:08:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3720366 /var/tmp/spdk2.sock 00:06:11.815 10:08:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:11.815 10:08:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:11.815 10:08:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3720366 /var/tmp/spdk2.sock 00:06:11.815 10:08:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:11.815 10:08:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.815 10:08:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:11.815 10:08:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.815 10:08:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3720366 /var/tmp/spdk2.sock 00:06:11.815 10:08:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3720366 ']' 00:06:11.815 10:08:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.815 10:08:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.815 10:08:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.815 10:08:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.815 10:08:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.815 [2024-12-13 10:08:05.450436] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:11.815 [2024-12-13 10:08:05.450558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720366 ] 00:06:11.815 [2024-12-13 10:08:05.608688] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3720138 has claimed it. 00:06:11.815 [2024-12-13 10:08:05.608737] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:12.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3720366) - No such process 00:06:12.382 ERROR: process (pid: 3720366) is no longer running 00:06:12.382 10:08:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.382 10:08:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:12.382 10:08:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:12.382 10:08:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:12.382 10:08:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:12.382 10:08:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:12.382 10:08:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:12.382 10:08:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:12.382 10:08:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:12.382 10:08:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:12.382 10:08:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3720138 00:06:12.382 10:08:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3720138 ']' 00:06:12.382 10:08:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3720138 00:06:12.382 10:08:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:12.382 10:08:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.382 10:08:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3720138 00:06:12.382 10:08:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.382 10:08:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.382 10:08:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3720138' 00:06:12.382 killing process with pid 3720138 00:06:12.382 10:08:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3720138 00:06:12.382 10:08:06 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3720138 00:06:14.916 00:06:14.917 real 0m4.268s 00:06:14.917 user 0m11.814s 00:06:14.917 sys 0m0.600s 00:06:14.917 10:08:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.917 10:08:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.917 ************************************ 00:06:14.917 END TEST locking_overlapped_coremask 00:06:14.917 ************************************ 00:06:14.917 10:08:08 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:14.917 10:08:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.917 10:08:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.917 10:08:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.917 ************************************ 00:06:14.917 START TEST locking_overlapped_coremask_via_rpc 00:06:14.917 ************************************ 00:06:14.917 10:08:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:14.917 10:08:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3720848 00:06:14.917 10:08:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3720848 /var/tmp/spdk.sock 00:06:14.917 10:08:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:14.917 10:08:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3720848 ']' 00:06:14.917 10:08:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.917 10:08:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.917 10:08:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.917 10:08:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.917 10:08:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.917 [2024-12-13 10:08:08.646937] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:14.917 [2024-12-13 10:08:08.647028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720848 ] 00:06:14.917 [2024-12-13 10:08:08.759279] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
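The check_remaining_locks step in the locking_overlapped_coremask run above verifies that, after the overlapping -m 0x1c instance is rejected, exactly the lock files for the first target's 0x7 mask are left behind. A small sketch of that comparison, assuming the /var/tmp/spdk_cpu_lock_NNN naming visible in this log:

  # Sketch of the remaining-locks comparison performed by check_remaining_locks.
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for -m 0x7
  if [[ ${locks[*]} == "${locks_expected[*]}" ]]; then
      echo "only the expected core locks remain"
  else
      echo "unexpected lock files: ${locks[*]}" >&2
  fi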
00:06:14.917 [2024-12-13 10:08:08.759317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.176 [2024-12-13 10:08:08.866591] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.176 [2024-12-13 10:08:08.866659] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.176 [2024-12-13 10:08:08.866667] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.113 10:08:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.113 10:08:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:16.113 10:08:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3721081 00:06:16.113 10:08:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3721081 /var/tmp/spdk2.sock 00:06:16.113 10:08:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:16.113 10:08:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3721081 ']' 00:06:16.113 10:08:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.113 10:08:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.113 10:08:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.113 10:08:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.113 10:08:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.113 [2024-12-13 10:08:09.819545] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:16.113 [2024-12-13 10:08:09.819640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3721081 ] 00:06:16.113 [2024-12-13 10:08:09.976005] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:16.113 [2024-12-13 10:08:09.976055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.372 [2024-12-13 10:08:10.209111] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.372 [2024-12-13 10:08:10.212503] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.372 [2024-12-13 10:08:10.212528] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:18.909 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.909 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:18.909 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:18.909 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.909 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.909 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.910 [2024-12-13 10:08:12.352564] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3720848 has claimed it. 
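The failure above is the point of the test: the first target was started with -m 0x7 (cores 0-2) and had its locks enabled over RPC, while the second target uses -m 0x1c (cores 2-4), so the two masks overlap on core 2 and the second framework_enable_cpumask_locks call has to be rejected. The overlap can be checked directly:

# Sketch: the contested core is the intersection of the two masks from the trace above.
printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4, i.e. bit 2 -> core 2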
00:06:18.910 request: 00:06:18.910 { 00:06:18.910 "method": "framework_enable_cpumask_locks", 00:06:18.910 "req_id": 1 00:06:18.910 } 00:06:18.910 Got JSON-RPC error response 00:06:18.910 response: 00:06:18.910 { 00:06:18.910 "code": -32603, 00:06:18.910 "message": "Failed to claim CPU core: 2" 00:06:18.910 } 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3720848 /var/tmp/spdk.sock 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3720848 ']' 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3721081 /var/tmp/spdk2.sock 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3721081 ']' 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
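The request/response pair above is the raw JSON-RPC exchange behind that rejection (-32603, "Failed to claim CPU core: 2"). Outside the test harness the same call could be issued with SPDK's rpc.py against the second target's socket; a sketch using the socket path from the trace:

# Sketch: ask the second target (started with --disable-cpumask-locks) to claim its cores.
# While the first target still holds core 2, this is expected to return the -32603 error above.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/spdk2.sock framework_enable_cpumask_locks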
00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:18.910 00:06:18.910 real 0m4.202s 00:06:18.910 user 0m1.187s 00:06:18.910 sys 0m0.186s 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.910 10:08:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.910 ************************************ 00:06:18.910 END TEST locking_overlapped_coremask_via_rpc 00:06:18.910 ************************************ 00:06:18.910 10:08:12 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:18.910 10:08:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3720848 ]] 00:06:18.910 10:08:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3720848 00:06:18.910 10:08:12 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3720848 ']' 00:06:18.910 10:08:12 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3720848 00:06:18.910 10:08:12 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:18.910 10:08:12 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.910 10:08:12 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3720848 00:06:19.170 10:08:12 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.170 10:08:12 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.170 10:08:12 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3720848' 00:06:19.170 killing process with pid 3720848 00:06:19.170 10:08:12 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3720848 00:06:19.170 10:08:12 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3720848 00:06:21.705 10:08:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3721081 ]] 00:06:21.705 10:08:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3721081 00:06:21.705 10:08:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3721081 ']' 00:06:21.705 10:08:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3721081 00:06:21.705 10:08:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:21.705 10:08:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:21.705 10:08:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3721081 00:06:21.705 10:08:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:21.705 10:08:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:21.705 10:08:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3721081' 00:06:21.705 killing process with pid 3721081 00:06:21.705 10:08:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3721081 00:06:21.705 10:08:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3721081 00:06:24.243 10:08:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.243 10:08:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:24.243 10:08:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3720848 ]] 00:06:24.243 10:08:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3720848 00:06:24.243 10:08:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3720848 ']' 00:06:24.243 10:08:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3720848 00:06:24.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3720848) - No such process 00:06:24.243 10:08:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3720848 is not found' 00:06:24.243 Process with pid 3720848 is not found 00:06:24.243 10:08:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3721081 ]] 00:06:24.243 10:08:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3721081 00:06:24.243 10:08:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3721081 ']' 00:06:24.243 10:08:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3721081 00:06:24.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3721081) - No such process 00:06:24.243 10:08:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3721081 is not found' 00:06:24.243 Process with pid 3721081 is not found 00:06:24.243 10:08:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.243 00:06:24.243 real 0m48.603s 00:06:24.243 user 1m24.148s 00:06:24.243 sys 0m6.484s 00:06:24.243 10:08:17 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.243 10:08:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.243 ************************************ 00:06:24.243 END TEST cpu_locks 00:06:24.243 ************************************ 00:06:24.243 00:06:24.243 real 1m17.790s 00:06:24.243 user 2m20.645s 00:06:24.243 sys 0m10.274s 00:06:24.243 10:08:17 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.243 10:08:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.243 ************************************ 00:06:24.243 END TEST event 00:06:24.243 ************************************ 00:06:24.243 10:08:17 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:24.243 10:08:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.243 10:08:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.243 10:08:17 -- common/autotest_common.sh@10 -- # set +x 00:06:24.243 ************************************ 00:06:24.243 START TEST thread 00:06:24.243 ************************************ 00:06:24.243 10:08:17 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:24.243 * Looking for test storage... 00:06:24.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:24.243 10:08:17 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:24.243 10:08:17 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:24.243 10:08:17 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:24.243 10:08:18 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:24.243 10:08:18 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.243 10:08:18 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.243 10:08:18 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.243 10:08:18 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.243 10:08:18 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.243 10:08:18 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.243 10:08:18 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.243 10:08:18 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.243 10:08:18 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.243 10:08:18 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.243 10:08:18 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.243 10:08:18 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:24.243 10:08:18 thread -- scripts/common.sh@345 -- # : 1 00:06:24.243 10:08:18 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.243 10:08:18 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.243 10:08:18 thread -- scripts/common.sh@365 -- # decimal 1 00:06:24.243 10:08:18 thread -- scripts/common.sh@353 -- # local d=1 00:06:24.243 10:08:18 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.243 10:08:18 thread -- scripts/common.sh@355 -- # echo 1 00:06:24.243 10:08:18 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.243 10:08:18 thread -- scripts/common.sh@366 -- # decimal 2 00:06:24.243 10:08:18 thread -- scripts/common.sh@353 -- # local d=2 00:06:24.243 10:08:18 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.243 10:08:18 thread -- scripts/common.sh@355 -- # echo 2 00:06:24.243 10:08:18 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.243 10:08:18 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.243 10:08:18 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.243 10:08:18 thread -- scripts/common.sh@368 -- # return 0 00:06:24.243 10:08:18 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.243 10:08:18 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:24.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.243 --rc genhtml_branch_coverage=1 00:06:24.243 --rc genhtml_function_coverage=1 00:06:24.243 --rc genhtml_legend=1 00:06:24.243 --rc geninfo_all_blocks=1 00:06:24.243 --rc geninfo_unexecuted_blocks=1 00:06:24.243 00:06:24.243 ' 00:06:24.243 10:08:18 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:24.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.243 --rc genhtml_branch_coverage=1 00:06:24.243 --rc genhtml_function_coverage=1 00:06:24.243 --rc genhtml_legend=1 00:06:24.243 --rc geninfo_all_blocks=1 00:06:24.243 --rc geninfo_unexecuted_blocks=1 00:06:24.243 
00:06:24.243 ' 00:06:24.243 10:08:18 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:24.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.243 --rc genhtml_branch_coverage=1 00:06:24.243 --rc genhtml_function_coverage=1 00:06:24.243 --rc genhtml_legend=1 00:06:24.243 --rc geninfo_all_blocks=1 00:06:24.243 --rc geninfo_unexecuted_blocks=1 00:06:24.243 00:06:24.243 ' 00:06:24.243 10:08:18 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:24.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.243 --rc genhtml_branch_coverage=1 00:06:24.243 --rc genhtml_function_coverage=1 00:06:24.243 --rc genhtml_legend=1 00:06:24.243 --rc geninfo_all_blocks=1 00:06:24.243 --rc geninfo_unexecuted_blocks=1 00:06:24.243 00:06:24.243 ' 00:06:24.243 10:08:18 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.243 10:08:18 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:24.243 10:08:18 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.243 10:08:18 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.243 ************************************ 00:06:24.243 START TEST thread_poller_perf 00:06:24.243 ************************************ 00:06:24.243 10:08:18 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.243 [2024-12-13 10:08:18.124872] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:24.243 [2024-12-13 10:08:18.124947] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3722534 ] 00:06:24.503 [2024-12-13 10:08:18.235305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.503 [2024-12-13 10:08:18.344047] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.503 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:26.018 [2024-12-13T09:08:19.909Z] ====================================== 00:06:26.018 [2024-12-13T09:08:19.909Z] busy:2108094614 (cyc) 00:06:26.018 [2024-12-13T09:08:19.909Z] total_run_count: 406000 00:06:26.018 [2024-12-13T09:08:19.909Z] tsc_hz: 2100000000 (cyc) 00:06:26.018 [2024-12-13T09:08:19.909Z] ====================================== 00:06:26.018 [2024-12-13T09:08:19.909Z] poller_cost: 5192 (cyc), 2472 (nsec) 00:06:26.018 00:06:26.018 real 0m1.480s 00:06:26.018 user 0m1.349s 00:06:26.018 sys 0m0.125s 00:06:26.018 10:08:19 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.018 10:08:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.018 ************************************ 00:06:26.018 END TEST thread_poller_perf 00:06:26.018 ************************************ 00:06:26.018 10:08:19 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.018 10:08:19 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:26.018 10:08:19 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.018 10:08:19 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.018 ************************************ 00:06:26.018 START TEST thread_poller_perf 00:06:26.018 ************************************ 00:06:26.018 10:08:19 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.018 [2024-12-13 10:08:19.670563] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:26.018 [2024-12-13 10:08:19.670652] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3722780 ] 00:06:26.018 [2024-12-13 10:08:19.779569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.018 [2024-12-13 10:08:19.886082] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.018 Running 1000 pollers for 1 seconds with 0 microseconds period. 
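The poller_cost line in the table above is derived from the counters printed with it: busy cycles divided by total_run_count gives cycles per poller call, and scaling by tsc_hz converts that to nanoseconds. Recomputing the 1-microsecond-period run as a quick check, with the numbers copied from the log:

# Sketch: rederive poller_cost from the counters above; rounding may differ slightly
# from poller_perf's own arithmetic.
busy=2108094614        # busy (cyc)
runs=406000            # total_run_count
tsc_hz=2100000000      # tsc_hz (cyc)
cyc=$(( busy / runs ))                      # 5192
nsec=$(( cyc * 1000000000 / tsc_hz ))       # 2472
echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"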
00:06:27.428 [2024-12-13T09:08:21.319Z] ====================================== 00:06:27.428 [2024-12-13T09:08:21.319Z] busy:2102393158 (cyc) 00:06:27.428 [2024-12-13T09:08:21.319Z] total_run_count: 4630000 00:06:27.428 [2024-12-13T09:08:21.319Z] tsc_hz: 2100000000 (cyc) 00:06:27.428 [2024-12-13T09:08:21.319Z] ====================================== 00:06:27.428 [2024-12-13T09:08:21.319Z] poller_cost: 454 (cyc), 216 (nsec) 00:06:27.428 00:06:27.428 real 0m1.465s 00:06:27.428 user 0m1.341s 00:06:27.428 sys 0m0.117s 00:06:27.428 10:08:21 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.428 10:08:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.428 ************************************ 00:06:27.428 END TEST thread_poller_perf 00:06:27.428 ************************************ 00:06:27.428 10:08:21 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:27.428 00:06:27.428 real 0m3.236s 00:06:27.428 user 0m2.844s 00:06:27.428 sys 0m0.400s 00:06:27.428 10:08:21 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.428 10:08:21 thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.428 ************************************ 00:06:27.428 END TEST thread 00:06:27.428 ************************************ 00:06:27.428 10:08:21 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:27.428 10:08:21 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:27.428 10:08:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.428 10:08:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.428 10:08:21 -- common/autotest_common.sh@10 -- # set +x 00:06:27.428 ************************************ 00:06:27.428 START TEST app_cmdline 00:06:27.428 ************************************ 00:06:27.428 10:08:21 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:27.428 * Looking for test storage... 
00:06:27.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:27.428 10:08:21 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:27.428 10:08:21 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:27.428 10:08:21 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:27.687 10:08:21 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:27.687 10:08:21 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.687 10:08:21 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.687 10:08:21 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.687 10:08:21 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.687 10:08:21 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.688 10:08:21 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:27.688 10:08:21 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.688 10:08:21 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:27.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.688 --rc genhtml_branch_coverage=1 00:06:27.688 --rc genhtml_function_coverage=1 00:06:27.688 --rc genhtml_legend=1 00:06:27.688 --rc geninfo_all_blocks=1 00:06:27.688 --rc geninfo_unexecuted_blocks=1 00:06:27.688 00:06:27.688 ' 00:06:27.688 10:08:21 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:27.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.688 --rc genhtml_branch_coverage=1 00:06:27.688 --rc genhtml_function_coverage=1 00:06:27.688 --rc genhtml_legend=1 00:06:27.688 --rc geninfo_all_blocks=1 00:06:27.688 --rc geninfo_unexecuted_blocks=1 
00:06:27.688 00:06:27.688 ' 00:06:27.688 10:08:21 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:27.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.688 --rc genhtml_branch_coverage=1 00:06:27.688 --rc genhtml_function_coverage=1 00:06:27.688 --rc genhtml_legend=1 00:06:27.688 --rc geninfo_all_blocks=1 00:06:27.688 --rc geninfo_unexecuted_blocks=1 00:06:27.688 00:06:27.688 ' 00:06:27.688 10:08:21 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:27.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.688 --rc genhtml_branch_coverage=1 00:06:27.688 --rc genhtml_function_coverage=1 00:06:27.688 --rc genhtml_legend=1 00:06:27.688 --rc geninfo_all_blocks=1 00:06:27.688 --rc geninfo_unexecuted_blocks=1 00:06:27.688 00:06:27.688 ' 00:06:27.688 10:08:21 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:27.688 10:08:21 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3723140 00:06:27.688 10:08:21 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3723140 00:06:27.688 10:08:21 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:27.688 10:08:21 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3723140 ']' 00:06:27.688 10:08:21 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.688 10:08:21 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.688 10:08:21 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.688 10:08:21 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.688 10:08:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.688 [2024-12-13 10:08:21.446106] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:27.688 [2024-12-13 10:08:21.446200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3723140 ] 00:06:27.688 [2024-12-13 10:08:21.558063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.948 [2024-12-13 10:08:21.664458] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.885 10:08:22 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.885 10:08:22 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:28.885 10:08:22 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:28.885 { 00:06:28.885 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:06:28.885 "fields": { 00:06:28.885 "major": 25, 00:06:28.885 "minor": 1, 00:06:28.885 "patch": 0, 00:06:28.885 "suffix": "-pre", 00:06:28.886 "commit": "e01cb43b8" 00:06:28.886 } 00:06:28.886 } 00:06:28.886 10:08:22 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:28.886 10:08:22 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:28.886 10:08:22 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:28.886 10:08:22 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:28.886 10:08:22 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:28.886 10:08:22 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.886 10:08:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.886 10:08:22 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:28.886 10:08:22 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:28.886 10:08:22 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.886 10:08:22 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:28.886 10:08:22 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:28.886 10:08:22 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.886 10:08:22 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:28.886 10:08:22 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.886 10:08:22 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:28.886 10:08:22 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.886 10:08:22 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:28.886 10:08:22 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.886 10:08:22 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:28.886 10:08:22 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.886 10:08:22 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:28.886 10:08:22 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:28.886 10:08:22 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:29.145 request: 00:06:29.145 { 00:06:29.145 "method": "env_dpdk_get_mem_stats", 00:06:29.145 "req_id": 1 00:06:29.145 } 00:06:29.145 Got JSON-RPC error response 00:06:29.145 response: 00:06:29.145 { 00:06:29.145 "code": -32601, 00:06:29.145 "message": "Method not found" 00:06:29.145 } 00:06:29.145 10:08:22 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:29.145 10:08:22 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:29.145 10:08:22 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:29.145 10:08:22 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:29.145 10:08:22 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3723140 00:06:29.145 10:08:22 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3723140 ']' 00:06:29.145 10:08:22 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3723140 00:06:29.145 10:08:22 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:29.145 10:08:22 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.145 10:08:22 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3723140 00:06:29.145 10:08:22 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.145 10:08:22 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.145 10:08:22 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3723140' 00:06:29.145 killing process with pid 3723140 00:06:29.145 10:08:22 app_cmdline -- common/autotest_common.sh@973 -- # kill 3723140 00:06:29.145 10:08:22 app_cmdline -- common/autotest_common.sh@978 -- # wait 3723140 00:06:31.681 00:06:31.682 real 0m4.017s 00:06:31.682 user 0m4.248s 00:06:31.682 sys 0m0.564s 00:06:31.682 10:08:25 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.682 10:08:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:31.682 ************************************ 00:06:31.682 END TEST app_cmdline 00:06:31.682 ************************************ 00:06:31.682 10:08:25 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:31.682 10:08:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.682 10:08:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.682 10:08:25 -- common/autotest_common.sh@10 -- # set +x 00:06:31.682 ************************************ 00:06:31.682 START TEST version 00:06:31.682 ************************************ 00:06:31.682 10:08:25 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:31.682 * Looking for test storage... 
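In the app_cmdline test that just finished, the "Method not found" (-32601) response is the allowlist doing its job: spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two RPCs answer and env_dpdk_get_mem_stats is refused. A sketch of the three calls against the default socket used there:

# Sketch: behaviour of an allowlisted target; paths match the workspace above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC spdk_get_version            # allowed - returns the version object shown above
$RPC rpc_get_methods             # allowed - lists the two permitted methods
$RPC env_dpdk_get_mem_stats      # refused with JSON-RPC error -32601 (Method not found)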
00:06:31.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:31.682 10:08:25 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.682 10:08:25 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.682 10:08:25 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.682 10:08:25 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.682 10:08:25 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.682 10:08:25 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.682 10:08:25 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.682 10:08:25 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.682 10:08:25 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.682 10:08:25 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.682 10:08:25 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.682 10:08:25 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.682 10:08:25 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.682 10:08:25 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.682 10:08:25 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.682 10:08:25 version -- scripts/common.sh@344 -- # case "$op" in 00:06:31.682 10:08:25 version -- scripts/common.sh@345 -- # : 1 00:06:31.682 10:08:25 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.682 10:08:25 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.682 10:08:25 version -- scripts/common.sh@365 -- # decimal 1 00:06:31.682 10:08:25 version -- scripts/common.sh@353 -- # local d=1 00:06:31.682 10:08:25 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.682 10:08:25 version -- scripts/common.sh@355 -- # echo 1 00:06:31.682 10:08:25 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.682 10:08:25 version -- scripts/common.sh@366 -- # decimal 2 00:06:31.682 10:08:25 version -- scripts/common.sh@353 -- # local d=2 00:06:31.682 10:08:25 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.682 10:08:25 version -- scripts/common.sh@355 -- # echo 2 00:06:31.682 10:08:25 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.682 10:08:25 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.682 10:08:25 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.682 10:08:25 version -- scripts/common.sh@368 -- # return 0 00:06:31.682 10:08:25 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.682 10:08:25 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.682 --rc genhtml_branch_coverage=1 00:06:31.682 --rc genhtml_function_coverage=1 00:06:31.682 --rc genhtml_legend=1 00:06:31.682 --rc geninfo_all_blocks=1 00:06:31.682 --rc geninfo_unexecuted_blocks=1 00:06:31.682 00:06:31.682 ' 00:06:31.682 10:08:25 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:31.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.682 --rc genhtml_branch_coverage=1 00:06:31.682 --rc genhtml_function_coverage=1 00:06:31.682 --rc genhtml_legend=1 00:06:31.682 --rc geninfo_all_blocks=1 00:06:31.682 --rc geninfo_unexecuted_blocks=1 00:06:31.682 00:06:31.682 ' 00:06:31.682 10:08:25 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.682 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.682 --rc genhtml_branch_coverage=1 00:06:31.682 --rc genhtml_function_coverage=1 00:06:31.682 --rc genhtml_legend=1 00:06:31.682 --rc geninfo_all_blocks=1 00:06:31.682 --rc geninfo_unexecuted_blocks=1 00:06:31.682 00:06:31.682 ' 00:06:31.682 10:08:25 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.682 --rc genhtml_branch_coverage=1 00:06:31.682 --rc genhtml_function_coverage=1 00:06:31.682 --rc genhtml_legend=1 00:06:31.682 --rc geninfo_all_blocks=1 00:06:31.682 --rc geninfo_unexecuted_blocks=1 00:06:31.682 00:06:31.682 ' 00:06:31.682 10:08:25 version -- app/version.sh@17 -- # get_header_version major 00:06:31.682 10:08:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:31.682 10:08:25 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.682 10:08:25 version -- app/version.sh@14 -- # cut -f2 00:06:31.682 10:08:25 version -- app/version.sh@17 -- # major=25 00:06:31.682 10:08:25 version -- app/version.sh@18 -- # get_header_version minor 00:06:31.682 10:08:25 version -- app/version.sh@14 -- # cut -f2 00:06:31.682 10:08:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:31.682 10:08:25 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.682 10:08:25 version -- app/version.sh@18 -- # minor=1 00:06:31.682 10:08:25 version -- app/version.sh@19 -- # get_header_version patch 00:06:31.682 10:08:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:31.682 10:08:25 version -- app/version.sh@14 -- # cut -f2 00:06:31.682 10:08:25 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.682 10:08:25 version -- app/version.sh@19 -- # patch=0 00:06:31.682 10:08:25 version -- app/version.sh@20 -- # get_header_version suffix 00:06:31.682 10:08:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:31.682 10:08:25 version -- app/version.sh@14 -- # cut -f2 00:06:31.682 10:08:25 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.682 10:08:25 version -- app/version.sh@20 -- # suffix=-pre 00:06:31.682 10:08:25 version -- app/version.sh@22 -- # version=25.1 00:06:31.682 10:08:25 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:31.682 10:08:25 version -- app/version.sh@28 -- # version=25.1rc0 00:06:31.682 10:08:25 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:31.682 10:08:25 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:31.682 10:08:25 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:31.682 10:08:25 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:31.682 00:06:31.682 real 0m0.247s 00:06:31.682 user 0m0.155s 00:06:31.682 sys 0m0.129s 00:06:31.682 10:08:25 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.682 
10:08:25 version -- common/autotest_common.sh@10 -- # set +x 00:06:31.682 ************************************ 00:06:31.682 END TEST version 00:06:31.682 ************************************ 00:06:31.682 10:08:25 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:31.682 10:08:25 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:31.682 10:08:25 -- spdk/autotest.sh@194 -- # uname -s 00:06:31.682 10:08:25 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:31.682 10:08:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:31.682 10:08:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:31.682 10:08:25 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:31.682 10:08:25 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:31.682 10:08:25 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:31.682 10:08:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:31.682 10:08:25 -- common/autotest_common.sh@10 -- # set +x 00:06:31.940 10:08:25 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:31.940 10:08:25 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:31.940 10:08:25 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:31.940 10:08:25 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:31.940 10:08:25 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:31.941 10:08:25 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:31.941 10:08:25 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:31.941 10:08:25 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:31.941 10:08:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.941 10:08:25 -- common/autotest_common.sh@10 -- # set +x 00:06:31.941 ************************************ 00:06:31.941 START TEST nvmf_tcp 00:06:31.941 ************************************ 00:06:31.941 10:08:25 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:31.941 * Looking for test storage... 
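The version test that just passed rebuilds 25.1rc0 from include/spdk/version.h and compares it with what the Python package reports. A condensed sketch of the same extraction, following the grep/cut/tr pipeline in the trace; the mapping of the -pre suffix to rc0 is assumed from the assembled string rather than shown here:

# Sketch: reassemble the SPDK version the way app/version.sh does above.
hdr=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
version=$major.$minor
(( patch != 0 )) && version=$version.$patch
[[ -n $suffix ]] && version=${version}rc0   # assumed: -pre shows up as rc0, as in the log
echo "$version"                             # 25.1rc0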
00:06:31.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:31.941 10:08:25 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.941 10:08:25 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.941 10:08:25 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.941 10:08:25 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.941 10:08:25 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:31.941 10:08:25 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.941 10:08:25 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.941 --rc genhtml_branch_coverage=1 00:06:31.941 --rc genhtml_function_coverage=1 00:06:31.941 --rc genhtml_legend=1 00:06:31.941 --rc geninfo_all_blocks=1 00:06:31.941 --rc geninfo_unexecuted_blocks=1 00:06:31.941 00:06:31.941 ' 00:06:31.941 10:08:25 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:31.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.941 --rc genhtml_branch_coverage=1 00:06:31.941 --rc genhtml_function_coverage=1 00:06:31.941 --rc genhtml_legend=1 00:06:31.941 --rc geninfo_all_blocks=1 00:06:31.941 --rc geninfo_unexecuted_blocks=1 00:06:31.941 00:06:31.941 ' 00:06:31.941 10:08:25 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:31.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.941 --rc genhtml_branch_coverage=1 00:06:31.941 --rc genhtml_function_coverage=1 00:06:31.941 --rc genhtml_legend=1 00:06:31.941 --rc geninfo_all_blocks=1 00:06:31.941 --rc geninfo_unexecuted_blocks=1 00:06:31.941 00:06:31.941 ' 00:06:31.941 10:08:25 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.941 --rc genhtml_branch_coverage=1 00:06:31.941 --rc genhtml_function_coverage=1 00:06:31.941 --rc genhtml_legend=1 00:06:31.941 --rc geninfo_all_blocks=1 00:06:31.941 --rc geninfo_unexecuted_blocks=1 00:06:31.941 00:06:31.941 ' 00:06:31.941 10:08:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:31.941 10:08:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:31.941 10:08:25 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:31.941 10:08:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:31.941 10:08:25 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.941 10:08:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:31.941 ************************************ 00:06:31.941 START TEST nvmf_target_core 00:06:31.941 ************************************ 00:06:31.941 10:08:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:32.201 * Looking for test storage... 00:06:32.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:32.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.201 --rc genhtml_branch_coverage=1 00:06:32.201 --rc genhtml_function_coverage=1 00:06:32.201 --rc genhtml_legend=1 00:06:32.201 --rc geninfo_all_blocks=1 00:06:32.201 --rc geninfo_unexecuted_blocks=1 00:06:32.201 00:06:32.201 ' 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:32.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.201 --rc genhtml_branch_coverage=1 00:06:32.201 --rc genhtml_function_coverage=1 00:06:32.201 --rc genhtml_legend=1 00:06:32.201 --rc geninfo_all_blocks=1 00:06:32.201 --rc geninfo_unexecuted_blocks=1 00:06:32.201 00:06:32.201 ' 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:32.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.201 --rc genhtml_branch_coverage=1 00:06:32.201 --rc genhtml_function_coverage=1 00:06:32.201 --rc genhtml_legend=1 00:06:32.201 --rc geninfo_all_blocks=1 00:06:32.201 --rc geninfo_unexecuted_blocks=1 00:06:32.201 00:06:32.201 ' 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:32.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.201 --rc genhtml_branch_coverage=1 00:06:32.201 --rc genhtml_function_coverage=1 00:06:32.201 --rc genhtml_legend=1 00:06:32.201 --rc geninfo_all_blocks=1 00:06:32.201 --rc geninfo_unexecuted_blocks=1 00:06:32.201 00:06:32.201 ' 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.201 10:08:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:32.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.201 10:08:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:32.201 
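Note: the "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message above is bash complaining that '[' '' -eq 1 ']' hands an empty (unset) variable to the arithmetic -eq operator; the test simply evaluates false and the script carries on. A minimal sketch of the failure mode and a defensive variant (the variable name "flag" is hypothetical, not the one common.sh actually tests):

  flag=""
  [ "$flag" -eq 1 ] && echo enabled        # prints "[: : integer expression expected"
  [ "${flag:-0}" -eq 1 ] && echo enabled   # defaulting to 0 keeps the numeric test quiet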
************************************ 00:06:32.201 START TEST nvmf_abort 00:06:32.202 ************************************ 00:06:32.202 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:32.462 * Looking for test storage... 00:06:32.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:32.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.462 --rc genhtml_branch_coverage=1 00:06:32.462 --rc genhtml_function_coverage=1 00:06:32.462 --rc genhtml_legend=1 00:06:32.462 --rc geninfo_all_blocks=1 00:06:32.462 --rc geninfo_unexecuted_blocks=1 00:06:32.462 00:06:32.462 ' 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:32.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.462 --rc genhtml_branch_coverage=1 00:06:32.462 --rc genhtml_function_coverage=1 00:06:32.462 --rc genhtml_legend=1 00:06:32.462 --rc geninfo_all_blocks=1 00:06:32.462 --rc geninfo_unexecuted_blocks=1 00:06:32.462 00:06:32.462 ' 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:32.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.462 --rc genhtml_branch_coverage=1 00:06:32.462 --rc genhtml_function_coverage=1 00:06:32.462 --rc genhtml_legend=1 00:06:32.462 --rc geninfo_all_blocks=1 00:06:32.462 --rc geninfo_unexecuted_blocks=1 00:06:32.462 00:06:32.462 ' 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:32.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.462 --rc genhtml_branch_coverage=1 00:06:32.462 --rc genhtml_function_coverage=1 00:06:32.462 --rc genhtml_legend=1 00:06:32.462 --rc geninfo_all_blocks=1 00:06:32.462 --rc geninfo_unexecuted_blocks=1 00:06:32.462 00:06:32.462 ' 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:32.462 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:32.463 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.463 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.463 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:32.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:32.463 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:32.463 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:32.463 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:32.463 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:32.463 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:32.463 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
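The nvmftestinit trace that follows detects the two e810 ports (0000:af:00.0 and 0000:af:00.1, exposed as cvl_0_0 and cvl_0_1), moves the target-side port into a dedicated network namespace, assigns 10.0.0.2 (target) and 10.0.0.1 (initiator), opens TCP port 4420 in iptables, and sanity-pings in both directions. A condensed sketch of the same setup, assuming root privileges and the interface names seen in this run:

  ip netns add cvl_0_0_ns_spdk                          # namespace that will host the NVMe-oF target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1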
00:06:32.463 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:32.463 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:32.463 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:32.463 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:32.463 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:32.463 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.463 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:32.463 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:32.463 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:32.463 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:32.463 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:32.463 10:08:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:39.031 10:08:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:39.031 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:39.031 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:39.031 10:08:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:39.031 Found net devices under 0000:af:00.0: cvl_0_0 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:39.031 Found net devices under 0000:af:00.1: cvl_0_1 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:39.031 10:08:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:39.031 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:39.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:39.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:06:39.032 00:06:39.032 --- 10.0.0.2 ping statistics --- 00:06:39.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.032 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:39.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:39.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:06:39.032 00:06:39.032 --- 10.0.0.1 ping statistics --- 00:06:39.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.032 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3727143 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3727143 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3727143 ']' 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.032 10:08:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.032 [2024-12-13 10:08:32.054312] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
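nvmfappstart then launches the nvmf_tgt application inside the target namespace with core mask 0xE (hence the three "Reactor started" notices on cores 1-3 below) and waits for its RPC socket before any configuration RPCs are issued. A sketch of the equivalent manual start; the polling loop is only a crude stand-in for the harness's waitforlisten, which additionally verifies that the process is alive and answering RPCs:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done   # wait for the default RPC UNIX socket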
00:06:39.032 [2024-12-13 10:08:32.054402] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:39.032 [2024-12-13 10:08:32.176303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.032 [2024-12-13 10:08:32.288494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:39.032 [2024-12-13 10:08:32.288539] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:39.032 [2024-12-13 10:08:32.288550] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.032 [2024-12-13 10:08:32.288577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.032 [2024-12-13 10:08:32.288585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:39.032 [2024-12-13 10:08:32.290623] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.032 [2024-12-13 10:08:32.290690] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.032 [2024-12-13 10:08:32.290696] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.032 10:08:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.032 10:08:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:39.032 10:08:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:39.032 10:08:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:39.032 10:08:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.032 10:08:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:39.032 10:08:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:39.032 10:08:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.032 10:08:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.032 [2024-12-13 10:08:32.899012] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.032 10:08:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.032 10:08:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:39.032 10:08:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.032 10:08:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.291 Malloc0 00:06:39.291 10:08:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.291 10:08:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:39.291 10:08:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.291 10:08:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.291 Delay0 
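With the target up, abort.sh builds its backing device over RPC: a TCP transport, a 64 MiB malloc bdev with a 4096-byte block size, and a delay bdev stacked on top with large artificial latencies (the bdev_delay_create values are in microseconds) so that plenty of I/O stays queued long enough to be aborted. Equivalent rpc.py calls, mirroring the rpc_cmd trace above; the $rpc helper is just a local shorthand for the default UNIX-socket client, which is reachable from the root namespace:

  rpc=./scripts/rpc.py                                   # talks to /var/tmp/spdk.sock by default
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0             # 64 MiB RAM-backed bdev, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000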
00:06:39.291 10:08:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.291 10:08:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:39.291 10:08:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.291 10:08:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.291 10:08:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.291 10:08:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:39.291 10:08:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.291 10:08:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.291 10:08:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.291 10:08:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:39.291 10:08:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.291 10:08:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.291 [2024-12-13 10:08:33.020843] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:39.291 10:08:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.291 10:08:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:39.291 10:08:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.291 10:08:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.291 10:08:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.291 10:08:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:39.291 [2024-12-13 10:08:33.136055] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:41.824 Initializing NVMe Controllers 00:06:41.824 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:41.824 controller IO queue size 128 less than required 00:06:41.824 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:41.824 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:41.824 Initialization complete. Launching workers. 
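Next the Delay0 bdev is exposed over NVMe/TCP and the abort example is pointed at it: an allow-any-host subsystem cnode0 receives the namespace and a data listener on 10.0.0.2:4420 plus a discovery listener, then the abort tool runs for one second on core 0 with queue depth 128, submitting I/O and aborting whatever is still outstanding. The same sequence, reusing the $rpc shorthand from above:

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./build/examples/abort -q 128 -t 1 -c 0x1 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

In the summary that follows, the 33950 submitted aborts split into 33893 successful plus 57 unsuccessful, with a further 66 that could not be submitted at all.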
00:06:41.824 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33893 00:06:41.824 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33950, failed to submit 66 00:06:41.824 success 33893, unsuccessful 57, failed 0 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:41.824 rmmod nvme_tcp 00:06:41.824 rmmod nvme_fabrics 00:06:41.824 rmmod nvme_keyring 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3727143 ']' 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3727143 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3727143 ']' 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3727143 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3727143 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3727143' 00:06:41.824 killing process with pid 3727143 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3727143 00:06:41.824 10:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3727143 00:06:43.201 10:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:43.201 10:08:36 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:43.201 10:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:43.201 10:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:43.201 10:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:43.201 10:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:43.201 10:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:43.201 10:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:43.201 10:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:43.201 10:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.201 10:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:43.201 10:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.105 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:45.105 00:06:45.105 real 0m12.703s 00:06:45.105 user 0m16.091s 00:06:45.105 sys 0m5.341s 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:45.106 ************************************ 00:06:45.106 END TEST nvmf_abort 00:06:45.106 ************************************ 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:45.106 ************************************ 00:06:45.106 START TEST nvmf_ns_hotplug_stress 00:06:45.106 ************************************ 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:45.106 * Looking for test storage... 
00:06:45.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:45.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.106 --rc genhtml_branch_coverage=1 00:06:45.106 --rc genhtml_function_coverage=1 00:06:45.106 --rc genhtml_legend=1 00:06:45.106 --rc geninfo_all_blocks=1 00:06:45.106 --rc geninfo_unexecuted_blocks=1 00:06:45.106 00:06:45.106 ' 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:45.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.106 --rc genhtml_branch_coverage=1 00:06:45.106 --rc genhtml_function_coverage=1 00:06:45.106 --rc genhtml_legend=1 00:06:45.106 --rc geninfo_all_blocks=1 00:06:45.106 --rc geninfo_unexecuted_blocks=1 00:06:45.106 00:06:45.106 ' 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:45.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.106 --rc genhtml_branch_coverage=1 00:06:45.106 --rc genhtml_function_coverage=1 00:06:45.106 --rc genhtml_legend=1 00:06:45.106 --rc geninfo_all_blocks=1 00:06:45.106 --rc geninfo_unexecuted_blocks=1 00:06:45.106 00:06:45.106 ' 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:45.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.106 --rc genhtml_branch_coverage=1 00:06:45.106 --rc genhtml_function_coverage=1 00:06:45.106 --rc genhtml_legend=1 00:06:45.106 --rc geninfo_all_blocks=1 00:06:45.106 --rc geninfo_unexecuted_blocks=1 00:06:45.106 00:06:45.106 ' 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:45.106 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.366 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.366 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.366 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.366 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.366 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.366 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.366 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.366 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.366 10:08:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:45.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:45.366 10:08:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:50.641 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:50.641 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:50.641 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:50.641 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:50.641 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:50.641 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:50.641 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:50.641 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:50.641 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:50.641 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:50.641 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:50.641 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:50.641 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:50.641 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:50.641 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:50.641 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:50.641 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:50.641 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:50.642 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.642 
10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:50.642 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:50.642 Found net devices under 0000:af:00.0: cvl_0_0 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:50.642 Found net devices under 0000:af:00.1: cvl_0_1 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:50.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:50.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:06:50.642 00:06:50.642 --- 10.0.0.2 ping statistics --- 00:06:50.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.642 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:50.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:50.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:06:50.642 00:06:50.642 --- 10.0.0.1 ping statistics --- 00:06:50.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.642 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3731359 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:50.642 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3731359 00:06:50.643 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
3731359 ']' 00:06:50.643 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.643 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.643 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.643 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.643 10:08:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:50.902 [2024-12-13 10:08:44.555443] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:50.902 [2024-12-13 10:08:44.555546] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.902 [2024-12-13 10:08:44.674755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.902 [2024-12-13 10:08:44.779385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:50.902 [2024-12-13 10:08:44.779430] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:50.902 [2024-12-13 10:08:44.779440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:50.902 [2024-12-13 10:08:44.779471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:50.902 [2024-12-13 10:08:44.779480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:50.902 [2024-12-13 10:08:44.781767] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.902 [2024-12-13 10:08:44.781830] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.902 [2024-12-13 10:08:44.781840] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.838 10:08:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.838 10:08:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:51.838 10:08:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:51.838 10:08:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:51.838 10:08:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:51.838 10:08:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.838 10:08:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:51.838 10:08:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:51.838 [2024-12-13 10:08:45.573022] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.838 10:08:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:52.097 10:08:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:52.355 [2024-12-13 10:08:45.992312] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:52.355 10:08:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:52.355 10:08:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:52.614 Malloc0 00:06:52.614 10:08:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:52.873 Delay0 00:06:52.873 10:08:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.131 10:08:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:53.131 NULL1 00:06:53.390 10:08:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:53.390 10:08:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3731900 00:06:53.390 10:08:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:53.390 10:08:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:06:53.390 10:08:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.648 10:08:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.907 10:08:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:53.907 10:08:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:54.165 true 00:06:54.165 10:08:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:06:54.165 10:08:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.424 10:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.683 10:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:54.683 10:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:54.683 true 00:06:54.683 10:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:06:54.683 10:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.942 10:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.201 10:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:55.201 10:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:55.460 true 00:06:55.460 10:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:06:55.460 10:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.719 10:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.977 10:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:55.977 10:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:55.977 true 00:06:55.977 10:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:06:55.977 10:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.235 10:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.494 10:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:56.494 10:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:56.753 true 00:06:56.753 10:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:06:56.753 10:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.011 10:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.269 10:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:57.269 10:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:57.269 true 00:06:57.269 10:08:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:06:57.269 10:08:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.528 10:08:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.787 10:08:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:57.787 10:08:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:58.046 true 00:06:58.046 10:08:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:06:58.046 10:08:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.304 10:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.563 10:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:58.563 10:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:58.563 true 00:06:58.563 10:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:06:58.563 10:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.821 10:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.080 10:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:59.080 10:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:59.338 true 00:06:59.338 10:08:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:06:59.338 10:08:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.597 10:08:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.855 10:08:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:59.855 10:08:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:59.855 true 00:06:59.855 10:08:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:06:59.855 10:08:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.114 10:08:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.372 10:08:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:00.372 10:08:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:00.630 true 00:07:00.630 10:08:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:00.630 10:08:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.889 10:08:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.148 10:08:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:01.148 10:08:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:01.148 true 00:07:01.148 10:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:01.148 10:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.405 10:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.663 10:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:01.663 10:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:01.921 true 00:07:01.921 10:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:01.921 10:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.179 10:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.437 10:08:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:02.437 10:08:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:02.437 true 00:07:02.437 10:08:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:02.437 10:08:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.694 10:08:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.953 10:08:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:02.953 10:08:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:03.211 true 00:07:03.211 10:08:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:03.211 10:08:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.470 10:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.728 10:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:03.728 10:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:03.728 true 00:07:03.728 10:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:03.728 10:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.987 10:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.245 10:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:04.245 10:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:04.504 true 00:07:04.504 10:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:04.504 10:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.762 10:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.021 10:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:05.021 10:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:05.021 true 00:07:05.280 10:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:05.280 10:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.280 10:08:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.539 10:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:05.539 10:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:05.798 true 00:07:05.798 10:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:05.798 10:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.057 10:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.317 10:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:06.317 10:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:06.317 true 00:07:06.576 10:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:06.576 10:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.576 10:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.835 10:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:06.835 10:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:07.093 true 00:07:07.093 10:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:07.094 10:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.352 10:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.611 10:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:07.611 10:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:07.869 true 00:07:07.869 10:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:07.869 10:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.869 10:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.127 10:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:08.127 10:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:08.386 true 00:07:08.386 10:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:08.386 10:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.645 10:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.904 10:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:08.904 10:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:09.163 true 00:07:09.163 10:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:09.163 10:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.163 10:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.422 10:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:09.422 10:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:09.681 true 00:07:09.681 10:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:09.681 10:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.940 10:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.199 10:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:10.199 10:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:10.458 true 00:07:10.458 10:09:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:10.458 10:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.458 10:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.716 10:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:10.716 10:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:10.975 true 00:07:10.975 10:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:10.975 10:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.234 10:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.493 10:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:11.493 10:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:11.753 true 00:07:11.753 10:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:11.753 10:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.097 10:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.097 10:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:12.097 10:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:12.373 true 00:07:12.373 10:09:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:12.373 10:09:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.373 10:09:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.632 10:09:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:12.632 10:09:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:12.891 true 00:07:12.891 10:09:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:12.891 10:09:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.149 10:09:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.408 10:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:13.408 10:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:13.408 true 00:07:13.667 10:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:13.667 10:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.667 10:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.926 10:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:13.926 10:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:14.186 true 00:07:14.186 10:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:14.186 10:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.444 10:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.702 10:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:14.702 10:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:14.702 true 00:07:14.702 10:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:14.702 10:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.960 10:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.220 10:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:15.220 10:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:15.479 true 00:07:15.479 10:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:15.479 10:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.738 10:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.998 10:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:15.998 10:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:15.998 true 00:07:16.257 10:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:16.257 10:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.257 10:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.516 10:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:16.516 10:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:16.775 true 00:07:16.775 10:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:16.775 10:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.034 10:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.293 10:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:17.293 10:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:17.293 true 00:07:17.293 10:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:17.293 10:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.552 10:09:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.811 10:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:17.811 10:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:18.070 true 00:07:18.070 10:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:18.070 10:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.329 10:09:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.587 10:09:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:18.587 10:09:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:18.587 true 00:07:18.587 10:09:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:18.587 10:09:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.846 10:09:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.106 10:09:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:19.106 10:09:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:19.365 true 00:07:19.365 10:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:19.365 10:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.624 10:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.883 10:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:19.883 10:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:19.883 true 00:07:20.142 10:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:20.142 10:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.142 10:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.401 10:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:20.402 10:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:20.660 true 00:07:20.661 10:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:20.661 10:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.919 10:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.179 10:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:21.179 10:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:21.179 true 00:07:21.437 10:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:21.437 10:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.437 10:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.696 10:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:21.696 10:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:21.955 true 00:07:21.955 10:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:21.955 10:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.214 10:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.472 10:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:22.472 10:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:22.472 true 00:07:22.472 10:09:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:22.472 10:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.731 10:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.991 10:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:22.991 10:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:23.249 true 00:07:23.249 10:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:23.249 10:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.509 10:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.767 10:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:23.768 10:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:23.768 Initializing NVMe Controllers 00:07:23.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:23.768 Controller IO queue size 128, less than required. 00:07:23.768 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:23.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:23.768 Initialization complete. Launching workers. 
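The trace above is the main hotplug-stress loop of ns_hotplug_stress.sh (script lines 44-50): while the background I/O process (PID 3731900) is still alive, the test hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1, re-adds the Delay0 bdev as a namespace, bumps null_size, and resizes the NULL1 null bdev. The following is a minimal sketch of that loop reconstructed from the trace, not the script itself; the $rpc and $perf_pid variable names and the starting null_size are illustrative only.

    # Sketch only: repeat remove/add/resize while the I/O generator (PID in $perf_pid) is running.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1024
    while kill -0 "$perf_pid"; do                                     # line 44: loop while the workload is alive
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # line 45: hot-remove NSID 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # line 46: hot-add Delay0 as a namespace
        null_size=$((null_size + 1))                                   # line 49: grow the resize target
        $rpc bdev_null_resize NULL1 "$null_size"                       # line 50: resize the NULL1 bdev
    done

The initiator driving I/O against the subsystem during this loop prints its latency summary immediately below.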
00:07:23.768 ======================================================== 00:07:23.768 Latency(us) 00:07:23.768 Device Information : IOPS MiB/s Average min max 00:07:23.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 23379.27 11.42 5474.73 3037.20 44181.50 00:07:23.768 ======================================================== 00:07:23.768 Total : 23379.27 11.42 5474.73 3037.20 44181.50 00:07:23.768 00:07:23.768 true 00:07:23.768 10:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3731900 00:07:23.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3731900) - No such process 00:07:23.768 10:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3731900 00:07:23.768 10:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.026 10:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:24.285 10:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:24.285 10:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:24.285 10:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:24.285 10:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:24.285 10:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:24.544 null0 00:07:24.544 10:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:24.544 10:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:24.545 10:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:24.545 null1 00:07:24.804 10:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:24.804 10:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:24.804 10:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:24.804 null2 00:07:24.804 10:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:24.804 10:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:24.804 10:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:25.063 null3 00:07:25.063 10:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i 
)) 00:07:25.063 10:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:25.063 10:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:25.321 null4 00:07:25.321 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:25.321 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:25.321 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:25.580 null5 00:07:25.580 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:25.580 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:25.580 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:25.580 null6 00:07:25.580 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:25.580 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:25.580 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:25.840 null7 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:25.840 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
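Once the single-namespace loop ends (the workload process exits and kill reports "No such process"), the script switches to the multi-worker phase traced above: nthreads=8, eight 100 MB null bdevs with a 4096-byte block size are created via bdev_null_create null0 through null7, and one add_remove worker per bdev is started in the background with its PID appended to the pids array; the wait on all eight worker PIDs appears just below. A rough sketch of that orchestration, assuming the variable names the trace shows (nthreads, pids) and treating add_remove as the worker function traced at script lines 14-18:

    # Sketch only: create one null bdev per worker, then run the workers concurrently.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc bdev_null_create "null$i" 100 4096    # 100 MB null bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &           # each worker gets its own NSID and backing bdev
        pids+=($!)
    done
    wait "${pids[@]}"                              # block until every worker finishes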
00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3737952 3737954 3737957 3737961 3737964 3737967 3737970 3737973 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.841 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:26.100 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:26.100 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:26.100 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:26.100 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.100 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:26.100 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:26.100 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:26.100 10:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.359 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.359 10:09:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.618 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:26.877 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.877 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.877 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.877 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:26.877 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.877 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:26.877 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.877 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.877 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:26.877 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:26.877 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:26.877 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:26.877 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.878 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:26.878 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:26.878 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:26.878 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.138 10:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:27.397 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:27.397 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:27.397 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.397 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:27.397 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:27.397 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:27.397 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:27.397 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:27.656 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:27.657 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.916 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:28.176 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.176 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:28.176 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:28.176 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:28.176 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:28.176 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:28.176 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:28.176 10:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
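The interleaved @16-@18 entries above come from the eight concurrent add_remove workers, each cycling its own namespace ID and null bdev through ten attach/detach iterations against nqn.2016-06.io.spdk:cnode1. Reconstructed from those entries (script lines 14-18), the worker body looks roughly like the sketch below; the positional-parameter signature is inferred from the "add_remove 1 null0" style invocations seen earlier, and $rpc is the same illustrative path variable as in the sketches above, not copied from the source.

    # Sketch only: one worker's add/remove cycle as suggested by the trace.
    add_remove() {
        local nsid=$1 bdev=$2                      # line 14: NSID and backing null bdev for this worker
        for ((i = 0; i < 10; i++)); do             # line 16: ten iterations per worker
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # line 17
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # line 18
        done
    }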
00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:28.435 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.694 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:28.694 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:28.694 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:28.694 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:28.694 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:28.694 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.694 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:28.694 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.694 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.694 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:28.694 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.694 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.694 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.953 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:29.212 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.212 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.212 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:29.212 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.212 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.212 10:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:29.212 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.212 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.212 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:29.212 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.212 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.212 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:29.212 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.212 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.212 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.212 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:29.212 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.212 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:29.212 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:29.212 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.212 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:29.212 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.212 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.212 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:29.212 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:29.471 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:29.471 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:29.471 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:29.471 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.471 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:29.471 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:29.471 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:29.472 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.472 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.472 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:29.730 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:29.731 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:29.989 10:09:23 
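The @16/@17/@18 markers above all point at the stress loop in target/ns_hotplug_stress.sh: ten iterations that hot-add eight null bdevs as namespaces 1-8 of nqn.2016-06.io.spdk:cnode1 and then hot-remove them again, in a shuffled order each pass. A minimal bash sketch of that loop, reconstructed only from the trace (the shuffled ordering and the null0..null7 bdev names are inferred from the RPC arguments shown; the real script may structure it differently):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for (( i = 0; i < 10; ++i )); do
      # Hot-add null0..null7 as namespaces 1..8, in random order.
      for n in $(seq 1 8 | shuf); do
          $rpc nvmf_subsystem_add_ns -n "$n" "$nqn" "null$(( n - 1 ))"
      done
      # Hot-remove the same namespaces, again in random order.
      for n in $(seq 1 8 | shuf); do
          $rpc nvmf_subsystem_remove_ns "$nqn" "$n"
      done
  done

Each pass exercises the target's namespace attach/detach paths back to back, which is the point of the hotplug stress test.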
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:29.989 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:29.989 rmmod nvme_tcp 00:07:30.248 rmmod nvme_fabrics 00:07:30.248 rmmod nvme_keyring 00:07:30.248 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:30.248 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:30.248 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:30.248 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3731359 ']' 00:07:30.248 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3731359 00:07:30.248 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3731359 ']' 00:07:30.248 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3731359 00:07:30.248 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:30.248 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.248 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3731359 00:07:30.248 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:30.248 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:30.248 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3731359' 00:07:30.248 killing process with pid 3731359 00:07:30.248 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3731359 00:07:30.248 10:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3731359 00:07:31.623 10:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:31.623 10:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:31.623 10:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:31.623 10:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:31.623 10:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:31.623 10:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:31.623 10:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:31.623 10:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:31.623 10:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:31.623 10:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.623 10:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.623 10:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.528 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:33.528 00:07:33.528 real 0m48.446s 00:07:33.528 user 3m25.603s 00:07:33.528 sys 0m16.561s 00:07:33.528 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.528 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:33.528 ************************************ 00:07:33.528 END TEST nvmf_ns_hotplug_stress 00:07:33.528 ************************************ 00:07:33.528 10:09:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:33.528 10:09:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:33.528 10:09:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.528 10:09:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:33.528 ************************************ 00:07:33.528 START TEST nvmf_delete_subsystem 00:07:33.528 ************************************ 00:07:33.528 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:33.788 * Looking for test storage... 
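Before the nvmf_delete_subsystem run that starts above, nvmftestfini tore down the previous test's target: the nvme-tcp/nvme-fabrics modules are unloaded (taking nvme_keyring with them), the nvmf_tgt process (pid 3731359) is killed, the firewall rules the test tagged SPDK_NVMF are stripped back out, the cvl_0_0_ns_spdk namespace is removed, and the leftover address on cvl_0_1 is flushed. A rough sketch of that teardown, with illustrative variable names rather than the exact ones in nvmf/common.sh:

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                              # nvmf_tgt started by the test
  # Keep every iptables rule except the ones this test tagged SPDK_NVMF.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk              # remove_spdk_ns
  ip -4 addr flush cvl_0_1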
00:07:33.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:33.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.788 --rc genhtml_branch_coverage=1 00:07:33.788 --rc genhtml_function_coverage=1 00:07:33.788 --rc genhtml_legend=1 00:07:33.788 --rc geninfo_all_blocks=1 00:07:33.788 --rc geninfo_unexecuted_blocks=1 00:07:33.788 00:07:33.788 ' 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:33.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.788 --rc genhtml_branch_coverage=1 00:07:33.788 --rc genhtml_function_coverage=1 00:07:33.788 --rc genhtml_legend=1 00:07:33.788 --rc geninfo_all_blocks=1 00:07:33.788 --rc geninfo_unexecuted_blocks=1 00:07:33.788 00:07:33.788 ' 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:33.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.788 --rc genhtml_branch_coverage=1 00:07:33.788 --rc genhtml_function_coverage=1 00:07:33.788 --rc genhtml_legend=1 00:07:33.788 --rc geninfo_all_blocks=1 00:07:33.788 --rc geninfo_unexecuted_blocks=1 00:07:33.788 00:07:33.788 ' 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:33.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.788 --rc genhtml_branch_coverage=1 00:07:33.788 --rc genhtml_function_coverage=1 00:07:33.788 --rc genhtml_legend=1 00:07:33.788 --rc geninfo_all_blocks=1 00:07:33.788 --rc geninfo_unexecuted_blocks=1 00:07:33.788 00:07:33.788 ' 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:33.788 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:33.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:33.789 10:09:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:39.063 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:39.063 
10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:39.063 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:39.063 Found net devices under 0000:af:00.0: cvl_0_0 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:39.063 Found net devices under 0000:af:00.1: cvl_0_1 
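The trace above is nvmf/common.sh picking the NICs for the phy run: it walks the Intel E810 device IDs (0x1592, 0x159b), resolves each matching PCI function to its kernel net device through sysfs, checks that the link is up, and ends up with cvl_0_0 and cvl_0_1. A simplified sketch of that discovery, assuming the usual sysfs layout (the operstate check stands in for whatever link test the real script uses):

  intel=0x8086
  e810_ids=(0x1592 0x159b)
  net_devs=()
  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor") == "$intel" ]] || continue
      [[ " ${e810_ids[*]} " == *" $(cat "$pci/device") "* ]] || continue
      # Each matching PCI function lists its netdev name(s) under net/.
      for dev in "$pci"/net/*; do
          [[ -e $dev && $(cat "$dev/operstate") == up ]] || continue
          net_devs+=("${dev##*/}")
      done
  done
  echo "Found net devices: ${net_devs[*]}"     # here: cvl_0_0 cvl_0_1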
00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.063 10:09:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.386 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.386 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.386 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:39.386 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:39.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:07:39.387 00:07:39.387 --- 10.0.0.2 ping statistics --- 00:07:39.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.387 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:39.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:07:39.387 00:07:39.387 --- 10.0.0.1 ping statistics --- 00:07:39.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.387 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3742535 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3742535 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3742535 ']' 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.387 10:09:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.387 10:09:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:39.646 [2024-12-13 10:09:33.299477] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:39.646 [2024-12-13 10:09:33.299568] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.646 [2024-12-13 10:09:33.414203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:39.646 [2024-12-13 10:09:33.520288] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.646 [2024-12-13 10:09:33.520334] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.646 [2024-12-13 10:09:33.520345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.646 [2024-12-13 10:09:33.520355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.646 [2024-12-13 10:09:33.520363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:39.646 [2024-12-13 10:09:33.522397] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.646 [2024-12-13 10:09:33.522411] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.213 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.213 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:40.213 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:40.213 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:40.213 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.473 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:40.473 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:40.473 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.473 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.473 [2024-12-13 10:09:34.133248] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.473 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.473 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:40.473 10:09:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.473 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.473 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.473 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.473 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.473 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.473 [2024-12-13 10:09:34.157372] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.473 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.473 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:40.473 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.473 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.473 NULL1 00:07:40.473 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.473 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:40.474 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.474 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.474 Delay0 00:07:40.474 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.474 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.474 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.474 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.474 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.474 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3742600 00:07:40.474 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:40.474 10:09:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:40.474 [2024-12-13 10:09:34.295272] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
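[Editor's sketch] At this point the trace has brought the target up end to end: nvmf_tgt is running inside the cvl_0_0_ns_spdk namespace, the TCP transport and subsystem nqn.2016-06.io.spdk:cnode1 exist with a listener on 10.0.0.2:4420, a null bdev wrapped in a delay bdev is attached as namespace 1, and spdk_nvme_perf has been launched against that listener. The rpc_cmd calls in the trace are thin wrappers around SPDK's scripts/rpc.py; a hedged sketch of the same configuration sequence issued directly, with arguments copied verbatim from the log and the default /var/tmp/spdk.sock socket mentioned earlier:

    # Sketch only: same RPCs as the traced rpc_cmd calls, issued via rpc.py.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192          # transport options copied from the trace
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512                  # 1000 MiB backing bdev, 512-byte blocks
    $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

Wrapping NULL1 in Delay0 with one-second (1000000 us) latency parameters keeps requests in flight long enough that the nvmf_delete_subsystem call issued next still has queued I/O to abort, which is what the long run of "completed with error (sct=0, sc=8)" lines below reflects.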
00:07:42.377 10:09:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:42.377 10:09:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.377 10:09:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.636 Write completed with error (sct=0, sc=8) 00:07:42.636 Write completed with error (sct=0, sc=8) 00:07:42.636 starting I/O failed: -6 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Write completed with error (sct=0, sc=8) 00:07:42.636 starting I/O failed: -6 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 starting I/O failed: -6 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 starting I/O failed: -6 00:07:42.636 Write completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 starting I/O failed: -6 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Write completed with error (sct=0, sc=8) 00:07:42.636 starting I/O failed: -6 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 starting I/O failed: -6 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Write completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Write completed with error (sct=0, sc=8) 00:07:42.636 starting I/O failed: -6 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 starting I/O failed: -6 00:07:42.636 Write completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Write completed with error (sct=0, sc=8) 00:07:42.636 starting I/O failed: -6 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Write completed with error (sct=0, sc=8) 00:07:42.636 Write completed with error (sct=0, sc=8) 00:07:42.636 Write completed with error (sct=0, sc=8) 00:07:42.636 starting I/O failed: -6 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Write completed with error (sct=0, sc=8) 00:07:42.636 starting I/O failed: -6 00:07:42.636 Write completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 starting I/O failed: -6 
00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 starting I/O failed: -6 00:07:42.636 [2024-12-13 10:09:36.509379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ed00 is same with the state(6) to be set 00:07:42.636 Write completed with error (sct=0, sc=8) 00:07:42.636 starting I/O failed: -6 00:07:42.636 Write completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Write completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 starting I/O failed: -6 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Write completed with error (sct=0, sc=8) 00:07:42.636 starting I/O failed: -6 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Write completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 Read completed with error (sct=0, sc=8) 00:07:42.636 starting I/O failed: -6 00:07:42.636 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 starting I/O failed: -6 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 starting I/O failed: -6 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 starting I/O failed: -6 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 starting I/O failed: -6 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 starting I/O failed: -6 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 starting I/O failed: -6 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 [2024-12-13 10:09:36.510319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed 
with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 [2024-12-13 10:09:36.511000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, 
sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Write completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 Read completed with error (sct=0, sc=8) 00:07:42.637 [2024-12-13 10:09:36.512629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set 00:07:44.015 [2024-12-13 10:09:37.473636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e080 is same with the state(6) to be set 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 
00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 [2024-12-13 10:09:37.512167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ea80 is same with the state(6) to be set 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 [2024-12-13 10:09:37.512815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ef80 is same with the state(6) to be set 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 [2024-12-13 10:09:37.513715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) 
to be set 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 Write completed with error (sct=0, sc=8) 00:07:44.015 Read completed with error (sct=0, sc=8) 00:07:44.015 10:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.015 10:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:44.015 [2024-12-13 10:09:37.519296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e800 is same with the state(6) to be set 00:07:44.015 10:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3742600 00:07:44.015 10:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:44.015 Initializing NVMe Controllers 00:07:44.015 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:44.015 Controller IO queue size 128, less than required. 00:07:44.015 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:44.015 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:44.015 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:44.015 Initialization complete. Launching workers. 
00:07:44.015 ======================================================== 00:07:44.015 Latency(us) 00:07:44.015 Device Information : IOPS MiB/s Average min max 00:07:44.015 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 191.05 0.09 948004.82 747.87 1013884.79 00:07:44.015 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.80 0.08 868154.20 699.86 1013329.75 00:07:44.015 ======================================================== 00:07:44.015 Total : 348.85 0.17 911884.63 699.86 1013884.79 00:07:44.015 00:07:44.015 [2024-12-13 10:09:37.524665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500001e080 (9): Bad file descriptor 00:07:44.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:44.274 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:44.274 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3742600 00:07:44.274 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3742600) - No such process 00:07:44.274 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3742600 00:07:44.274 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:44.274 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3742600 00:07:44.274 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:44.274 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:44.274 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:44.274 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:44.274 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3742600 00:07:44.274 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:44.274 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:44.274 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:44.274 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:44.275 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:44.275 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.275 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:44.275 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.275 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:44.275 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.275 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:44.275 [2024-12-13 10:09:38.047434] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.275 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.275 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.275 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.275 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:44.275 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.275 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3743276 00:07:44.275 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:44.275 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:44.275 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3743276 00:07:44.275 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:44.275 [2024-12-13 10:09:38.161123] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
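[Editor's sketch] The second pass repeats the exercise with a 3-second perf run (pid 3743276) and then tears the subsystem down underneath it while delete_subsystem.sh polls the perf process; the repeated (( delay++ > 20 )) / kill -0 / sleep 0.5 triplets that follow are that polling loop unrolled in the xtrace output. A compact sketch of the loop, assuming the pid taken from the log:

    # Sketch of the polling pattern traced below: re-check the perf process
    # every 0.5 s and bail out after ~20 iterations (~10 s).
    perf_pid=3743276
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        sleep 0.5
        if (( delay++ > 20 )); then
            echo "spdk_nvme_perf (pid $perf_pid) did not exit in time" >&2
            exit 1
        fi
    done

The loop exits normally here because the 3-second workload finishes on its own, after which the script reports "No such process" for the pid and proceeds to nvmftestfini cleanup.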
00:07:44.842 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:44.842 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3743276 00:07:44.842 10:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:45.410 10:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:45.410 10:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3743276 00:07:45.410 10:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:45.977 10:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:45.977 10:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3743276 00:07:45.977 10:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:46.235 10:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:46.235 10:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3743276 00:07:46.235 10:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:46.801 10:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:46.801 10:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3743276 00:07:46.801 10:09:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:47.368 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:47.368 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3743276 00:07:47.368 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:47.626 Initializing NVMe Controllers 00:07:47.626 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:47.626 Controller IO queue size 128, less than required. 00:07:47.626 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:47.626 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:47.626 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:47.626 Initialization complete. Launching workers. 
00:07:47.626 ======================================================== 00:07:47.626 Latency(us) 00:07:47.626 Device Information : IOPS MiB/s Average min max 00:07:47.626 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005337.40 1000172.77 1042131.61 00:07:47.626 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004573.12 1000201.63 1013808.35 00:07:47.626 ======================================================== 00:07:47.626 Total : 256.00 0.12 1004955.26 1000172.77 1042131.61 00:07:47.626 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3743276 00:07:47.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3743276) - No such process 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3743276 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:47.885 rmmod nvme_tcp 00:07:47.885 rmmod nvme_fabrics 00:07:47.885 rmmod nvme_keyring 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3742535 ']' 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3742535 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3742535 ']' 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3742535 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3742535 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3742535' 00:07:47.885 killing process with pid 3742535 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3742535 00:07:47.885 10:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3742535 00:07:49.262 10:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:49.262 10:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:49.262 10:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:49.263 10:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:49.263 10:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:49.263 10:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:49.263 10:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:49.263 10:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:49.263 10:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:49.263 10:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.263 10:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.263 10:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.168 10:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:51.168 00:07:51.168 real 0m17.560s 00:07:51.168 user 0m32.500s 00:07:51.168 sys 0m5.459s 00:07:51.168 10:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.168 10:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.168 ************************************ 00:07:51.168 END TEST nvmf_delete_subsystem 00:07:51.168 ************************************ 00:07:51.168 10:09:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:51.168 10:09:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:51.168 10:09:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.168 10:09:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:51.168 ************************************ 00:07:51.168 START TEST nvmf_host_management 00:07:51.168 ************************************ 00:07:51.168 10:09:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:51.168 * Looking for test storage... 
00:07:51.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:51.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.432 --rc genhtml_branch_coverage=1 00:07:51.432 --rc genhtml_function_coverage=1 00:07:51.432 --rc genhtml_legend=1 00:07:51.432 --rc geninfo_all_blocks=1 00:07:51.432 --rc geninfo_unexecuted_blocks=1 00:07:51.432 00:07:51.432 ' 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:51.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.432 --rc genhtml_branch_coverage=1 00:07:51.432 --rc genhtml_function_coverage=1 00:07:51.432 --rc genhtml_legend=1 00:07:51.432 --rc geninfo_all_blocks=1 00:07:51.432 --rc geninfo_unexecuted_blocks=1 00:07:51.432 00:07:51.432 ' 00:07:51.432 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:51.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.432 --rc genhtml_branch_coverage=1 00:07:51.432 --rc genhtml_function_coverage=1 00:07:51.433 --rc genhtml_legend=1 00:07:51.433 --rc geninfo_all_blocks=1 00:07:51.433 --rc geninfo_unexecuted_blocks=1 00:07:51.433 00:07:51.433 ' 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:51.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.433 --rc genhtml_branch_coverage=1 00:07:51.433 --rc genhtml_function_coverage=1 00:07:51.433 --rc genhtml_legend=1 00:07:51.433 --rc geninfo_all_blocks=1 00:07:51.433 --rc geninfo_unexecuted_blocks=1 00:07:51.433 00:07:51.433 ' 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:51.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:51.433 10:09:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:56.706 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:56.706 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:56.706 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:56.707 Found net devices under 0000:af:00.0: cvl_0_0 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.707 10:09:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:56.707 Found net devices under 0000:af:00.1: cvl_0_1 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:56.707 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:56.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:07:56.966 00:07:56.966 --- 10.0.0.2 ping statistics --- 00:07:56.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.966 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:56.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:56.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:07:56.966 00:07:56.966 --- 10.0.0.1 ping statistics --- 00:07:56.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.966 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3747459 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3747459 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:56.966 10:09:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3747459 ']' 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.966 10:09:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.966 [2024-12-13 10:09:50.759415] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:56.966 [2024-12-13 10:09:50.759509] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.226 [2024-12-13 10:09:50.880065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:57.226 [2024-12-13 10:09:50.993841] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:57.226 [2024-12-13 10:09:50.993885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:57.226 [2024-12-13 10:09:50.993895] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:57.226 [2024-12-13 10:09:50.993905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:57.226 [2024-12-13 10:09:50.993914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
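For reference, the TCP test-bed bring-up traced above reduces to roughly the following shell steps (a condensed sketch using the cvl_0_0/cvl_0_1 device names and 10.0.0.x addresses seen in this run, not the verbatim nvmf/common.sh code):

  # move the target-side port into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # start the nvmf target inside the namespace (core mask 0x1E = cores 1-4, all trace groups)
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E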
00:07:57.226 [2024-12-13 10:09:50.996348] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.226 [2024-12-13 10:09:50.996420] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:57.226 [2024-12-13 10:09:50.996501] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.226 [2024-12-13 10:09:50.996522] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:57.793 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.793 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:57.793 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:57.793 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:57.793 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.793 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.794 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:57.794 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.794 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.794 [2024-12-13 10:09:51.611736] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.794 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.794 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:57.794 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:57.794 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.794 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:57.794 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:57.794 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:57.794 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.794 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.052 Malloc0 00:07:58.052 [2024-12-13 10:09:51.749923] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:58.052 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.052 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:58.052 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:58.052 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.052 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=3747698 00:07:58.052 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3747698 /var/tmp/bdevperf.sock 00:07:58.052 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3747698 ']' 00:07:58.052 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:58.052 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:58.052 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:58.052 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:58.052 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:58.052 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.052 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:58.053 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:58.053 { 00:07:58.053 "params": { 00:07:58.053 "name": "Nvme$subsystem", 00:07:58.053 "trtype": "$TEST_TRANSPORT", 00:07:58.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:58.053 "adrfam": "ipv4", 00:07:58.053 "trsvcid": "$NVMF_PORT", 00:07:58.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:58.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:58.053 "hdgst": ${hdgst:-false}, 00:07:58.053 "ddgst": ${ddgst:-false} 00:07:58.053 }, 00:07:58.053 "method": "bdev_nvme_attach_controller" 00:07:58.053 } 00:07:58.053 EOF 00:07:58.053 )") 00:07:58.053 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:58.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:58.053 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.053 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.053 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:58.053 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:58.053 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:58.053 10:09:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:58.053 "params": { 00:07:58.053 "name": "Nvme0", 00:07:58.053 "trtype": "tcp", 00:07:58.053 "traddr": "10.0.0.2", 00:07:58.053 "adrfam": "ipv4", 00:07:58.053 "trsvcid": "4420", 00:07:58.053 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:58.053 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:58.053 "hdgst": false, 00:07:58.053 "ddgst": false 00:07:58.053 }, 00:07:58.053 "method": "bdev_nvme_attach_controller" 00:07:58.053 }' 00:07:58.053 [2024-12-13 10:09:51.871716] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
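The substituted config that gen_nvmf_target_json feeds to bdevperf on /dev/fd/63 amounts to a single bdev_nvme_attach_controller entry; the values below are taken from the printf output above, with the helper's outer wrapper (not shown in this trace) omitted:

  {
    "params": {
      "name": "Nvme0",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode0",
      "hostnqn": "nqn.2016-06.io.spdk:host0",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }

  # consumed by the perf job started just above (paths shortened):
  build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10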
00:07:58.053 [2024-12-13 10:09:51.871806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3747698 ] 00:07:58.311 [2024-12-13 10:09:51.985133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.311 [2024-12-13 10:09:52.097787] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.880 Running I/O for 10 seconds... 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=297 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 297 -ge 100 ']' 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:58.880 10:09:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.880 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.880 [2024-12-13 10:09:52.753846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:58.880 [2024-12-13 10:09:52.753899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.880 [2024-12-13 10:09:52.753919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:58.880 [2024-12-13 10:09:52.753931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.880 [2024-12-13 10:09:52.753942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:58.880 [2024-12-13 10:09:52.753951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.880 [2024-12-13 10:09:52.753962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:58.880 [2024-12-13 10:09:52.753972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.880 [2024-12-13 10:09:52.753982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:07:58.880 [2024-12-13 10:09:52.754425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.880 [2024-12-13 10:09:52.754458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.880 [2024-12-13 10:09:52.754488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.880 [2024-12-13 10:09:52.754499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.880 [2024-12-13 10:09:52.754512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.880 [2024-12-13 10:09:52.754522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.880 [2024-12-13 10:09:52.754533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.880 [2024-12-13 10:09:52.754543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.880 [2024-12-13 10:09:52.754555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.880 [2024-12-13 10:09:52.754565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.880 [2024-12-13 10:09:52.754577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.880 [2024-12-13 10:09:52.754586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.880 [2024-12-13 10:09:52.754597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.880 [2024-12-13 10:09:52.754607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.880 [2024-12-13 10:09:52.754618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.880 [2024-12-13 10:09:52.754628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.880 [2024-12-13 10:09:52.754639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.880 [2024-12-13 10:09:52.754649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.880 [2024-12-13 10:09:52.754660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.880 [2024-12-13 10:09:52.754672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.880 [2024-12-13 10:09:52.754683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.880 [2024-12-13 10:09:52.754693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.880 [2024-12-13 10:09:52.754705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.754714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.754726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.754736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.754747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.754757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.754768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.754777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.754789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.754799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.754810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.754819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.754831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.754840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.754851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.754861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.754872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.754882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.754893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.754903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.754915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.754924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.754937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.754947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.754958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.754968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.754979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.754988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755190] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.881 [2024-12-13 10:09:52.755551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.881 [2024-12-13 10:09:52.755561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.882 [2024-12-13 10:09:52.755572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.882 [2024-12-13 10:09:52.755581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.882 [2024-12-13 10:09:52.755592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.882 [2024-12-13 10:09:52.755601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.882 [2024-12-13 10:09:52.755612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.882 [2024-12-13 10:09:52.755622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.882 [2024-12-13 10:09:52.755633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.882 [2024-12-13 10:09:52.755642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.882 [2024-12-13 10:09:52.755653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.882 [2024-12-13 10:09:52.755662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.882 [2024-12-13 10:09:52.755674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.882 [2024-12-13 10:09:52.755684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.882 [2024-12-13 10:09:52.755695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.882 [2024-12-13 10:09:52.755705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.882 [2024-12-13 10:09:52.755717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.882 [2024-12-13 10:09:52.755730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.882 [2024-12-13 10:09:52.755742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.882 [2024-12-13 10:09:52.755755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.882 [2024-12-13 10:09:52.755766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.882 [2024-12-13 10:09:52.755776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.882 [2024-12-13 10:09:52.755788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.882 [2024-12-13 10:09:52.755797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.882 [2024-12-13 10:09:52.755809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.882 [2024-12-13 10:09:52.755819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.882 [2024-12-13 10:09:52.757101] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:58.882 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.882 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:58.882 task offset: 45184 on job bdev=Nvme0n1 fails 00:07:58.882 00:07:58.882 Latency(us) 00:07:58.882 [2024-12-13T09:09:52.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.882 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:58.882 Job: Nvme0n1 ended in about 0.22 seconds with error 00:07:58.882 Verification LBA range: start 0x0 length 0x400 00:07:58.882 Nvme0n1 : 0.22 1583.55 98.97 287.10 0.00 32695.89 2044.10 30833.13 00:07:58.882 [2024-12-13T09:09:52.773Z] =================================================================================================================== 00:07:58.882 [2024-12-13T09:09:52.773Z] Total : 1583.55 98.97 287.10 0.00 32695.89 2044.10 30833.13 00:07:58.882 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.882 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.882 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.882 10:09:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:59.141 [2024-12-13 10:09:52.773209] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.141 [2024-12-13 10:09:52.773253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:07:59.141 [2024-12-13 10:09:52.877673] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
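The host-management check traced above (poll I/O, yank the host's ACL entry, then restore it) condenses to roughly the following; rpc_cmd in the harness wraps SPDK's scripts/rpc.py, and the loop shape and sleep interval here are illustrative rather than the verbatim host_management.sh logic:

  # wait until bdevperf has completed at least 100 reads on Nvme0n1
  # (297 were already done on the first poll in this run)
  while true; do
      reads=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
              | jq -r '.bdevs[0].num_read_ops')
      [ "$reads" -ge 100 ] && break
      sleep 0.25
  done
  # remove the host from the subsystem: in-flight I/O is aborted (the
  # "ABORTED - SQ DELETION" completions above) and the bdevperf job fails
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # re-add the host; the initiator reconnects ("Resetting controller successful")
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0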
00:08:00.077 10:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3747698 00:08:00.077 10:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:00.077 10:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:00.077 10:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:00.077 10:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:00.077 10:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:00.077 10:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:00.077 10:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:00.077 { 00:08:00.077 "params": { 00:08:00.077 "name": "Nvme$subsystem", 00:08:00.077 "trtype": "$TEST_TRANSPORT", 00:08:00.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:00.077 "adrfam": "ipv4", 00:08:00.077 "trsvcid": "$NVMF_PORT", 00:08:00.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:00.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:00.077 "hdgst": ${hdgst:-false}, 00:08:00.077 "ddgst": ${ddgst:-false} 00:08:00.077 }, 00:08:00.077 "method": "bdev_nvme_attach_controller" 00:08:00.077 } 00:08:00.077 EOF 00:08:00.077 )") 00:08:00.077 10:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:00.077 10:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:00.077 10:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:00.077 10:09:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:00.077 "params": { 00:08:00.077 "name": "Nvme0", 00:08:00.077 "trtype": "tcp", 00:08:00.077 "traddr": "10.0.0.2", 00:08:00.077 "adrfam": "ipv4", 00:08:00.077 "trsvcid": "4420", 00:08:00.077 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:00.077 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:00.077 "hdgst": false, 00:08:00.078 "ddgst": false 00:08:00.078 }, 00:08:00.078 "method": "bdev_nvme_attach_controller" 00:08:00.078 }' 00:08:00.078 [2024-12-13 10:09:53.841888] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:00.078 [2024-12-13 10:09:53.841972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748155 ] 00:08:00.078 [2024-12-13 10:09:53.954977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.336 [2024-12-13 10:09:54.067932] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.904 Running I/O for 1 seconds... 
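For readability, here is roughly what the gen_nvmf_target_json output above looks like when handed to bdevperf through --json /dev/fd/62. The bdev_nvme_attach_controller entry is copied from the printf output in the trace; the outer "subsystems"/"bdev" wrapper is the standard SPDK JSON config layout and is assumed here (the real helper may emit additional entries):

bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
"$bdevperf" -q 64 -o 65536 -w verify -t 1 --json <(cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON
)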
00:08:01.839 1792.00 IOPS, 112.00 MiB/s 00:08:01.839 Latency(us) 00:08:01.839 [2024-12-13T09:09:55.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.839 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:01.839 Verification LBA range: start 0x0 length 0x400 00:08:01.839 Nvme0n1 : 1.02 1816.94 113.56 0.00 0.00 34650.57 5211.67 30333.81 00:08:01.839 [2024-12-13T09:09:55.730Z] =================================================================================================================== 00:08:01.839 [2024-12-13T09:09:55.730Z] Total : 1816.94 113.56 0.00 0.00 34650.57 5211.67 30333.81 00:08:02.775 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:02.775 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:02.775 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:02.775 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:02.775 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:02.775 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:02.775 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:02.775 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:02.775 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:02.775 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:02.775 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:02.775 rmmod nvme_tcp 00:08:02.775 rmmod nvme_fabrics 00:08:02.775 rmmod nvme_keyring 00:08:02.775 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:02.775 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:02.775 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:02.775 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3747459 ']' 00:08:02.775 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3747459 00:08:02.775 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3747459 ']' 00:08:02.775 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3747459 00:08:02.775 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:02.775 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.775 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3747459 00:08:03.034 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:03.034 10:09:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:03.034 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3747459' 00:08:03.034 killing process with pid 3747459 00:08:03.034 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3747459 00:08:03.034 10:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3747459 00:08:04.421 [2024-12-13 10:09:57.966344] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:04.421 10:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:04.421 10:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:04.421 10:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:04.421 10:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:04.421 10:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:04.421 10:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:04.421 10:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:04.421 10:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:04.421 10:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:04.421 10:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.421 10:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.421 10:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.325 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:06.325 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:06.325 00:08:06.325 real 0m15.133s 00:08:06.325 user 0m32.899s 00:08:06.325 sys 0m5.376s 00:08:06.325 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.325 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:06.325 ************************************ 00:08:06.325 END TEST nvmf_host_management 00:08:06.325 ************************************ 00:08:06.325 10:10:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:06.325 10:10:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:06.325 10:10:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.325 10:10:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:06.325 ************************************ 00:08:06.325 START TEST nvmf_lvol 00:08:06.325 ************************************ 00:08:06.325 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:06.584 * Looking for test storage... 00:08:06.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.584 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:06.584 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:08:06.584 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:06.584 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:06.584 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.584 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.584 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.584 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.584 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.584 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.584 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:06.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.585 --rc genhtml_branch_coverage=1 00:08:06.585 --rc genhtml_function_coverage=1 00:08:06.585 --rc genhtml_legend=1 00:08:06.585 --rc geninfo_all_blocks=1 00:08:06.585 --rc geninfo_unexecuted_blocks=1 00:08:06.585 00:08:06.585 ' 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:06.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.585 --rc genhtml_branch_coverage=1 00:08:06.585 --rc genhtml_function_coverage=1 00:08:06.585 --rc genhtml_legend=1 00:08:06.585 --rc geninfo_all_blocks=1 00:08:06.585 --rc geninfo_unexecuted_blocks=1 00:08:06.585 00:08:06.585 ' 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:06.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.585 --rc genhtml_branch_coverage=1 00:08:06.585 --rc genhtml_function_coverage=1 00:08:06.585 --rc genhtml_legend=1 00:08:06.585 --rc geninfo_all_blocks=1 00:08:06.585 --rc geninfo_unexecuted_blocks=1 00:08:06.585 00:08:06.585 ' 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:06.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.585 --rc genhtml_branch_coverage=1 00:08:06.585 --rc genhtml_function_coverage=1 00:08:06.585 --rc genhtml_legend=1 00:08:06.585 --rc geninfo_all_blocks=1 00:08:06.585 --rc geninfo_unexecuted_blocks=1 00:08:06.585 00:08:06.585 ' 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
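The scripts/common.sh trace just above is the dotted-version gate the suite runs before picking lcov options ("lt 1.15 2"): split both versions on "." and "-", then compare component by component. A simplified reconstruction of that check, for numeric components only and not the exact upstream helper:

version_lt() {
    # returns 0 when $1 sorts strictly before $2, e.g. version_lt 1.15 2
    local IFS=.-
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov is older than 2.x"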
00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:06.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:06.585 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:06.586 10:10:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:11.855 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:11.855 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:11.855 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:12.115 10:10:05 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:12.115 Found net devices under 0000:af:00.0: cvl_0_0 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:12.115 Found net devices under 0000:af:00.1: cvl_0_1 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:12.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:12.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:08:12.115 00:08:12.115 --- 10.0.0.2 ping statistics --- 00:08:12.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.115 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:12.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:12.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:08:12.115 00:08:12.115 --- 10.0.0.1 ping statistics --- 00:08:12.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.115 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:12.115 10:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:12.375 10:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:12.375 10:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:12.375 10:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:12.375 10:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:12.375 10:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3752304 00:08:12.375 10:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3752304 00:08:12.375 10:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:12.375 10:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3752304 ']' 00:08:12.375 10:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.375 10:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.375 10:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.375 10:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.375 10:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:12.375 [2024-12-13 10:10:06.115110] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
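Condensing the nvmf_tcp_init plumbing traced above: the two e810 ports found earlier (PCI ID 8086:159b) come up as cvl_0_0 and cvl_0_1, cvl_0_0 is moved into a private network namespace to play the target, and the addresses, firewall rule and ping checks are the same ones the trace shows (address flushes and the iptables -m comment option are dropped here for brevity):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP port 4420
ping -c 1 10.0.0.2                                                  # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> initiator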
00:08:12.375 [2024-12-13 10:10:06.115206] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.375 [2024-12-13 10:10:06.232888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:12.634 [2024-12-13 10:10:06.339913] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.634 [2024-12-13 10:10:06.339960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.634 [2024-12-13 10:10:06.339970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.634 [2024-12-13 10:10:06.339980] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.634 [2024-12-13 10:10:06.339988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.634 [2024-12-13 10:10:06.342237] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.634 [2024-12-13 10:10:06.342308] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.634 [2024-12-13 10:10:06.342315] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.201 10:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.201 10:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:13.201 10:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:13.201 10:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:13.201 10:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:13.201 10:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.201 10:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:13.460 [2024-12-13 10:10:07.130753] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.460 10:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:13.718 10:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:13.718 10:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:13.977 10:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:13.977 10:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:14.236 10:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:14.236 10:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=527cdef2-7268-4bf7-8160-77ab46a91a4d 00:08:14.236 10:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 527cdef2-7268-4bf7-8160-77ab46a91a4d lvol 20 00:08:14.554 10:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=83daad0d-bf55-4040-97f4-ee35a35e2b90 00:08:14.554 10:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:14.814 10:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 83daad0d-bf55-4040-97f4-ee35a35e2b90 00:08:14.814 10:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:15.072 [2024-12-13 10:10:08.824427] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.072 10:10:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:15.331 10:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3752790 00:08:15.331 10:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:15.331 10:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:16.268 10:10:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 83daad0d-bf55-4040-97f4-ee35a35e2b90 MY_SNAPSHOT 00:08:16.527 10:10:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=eeb1f43d-7ebf-4309-ae5c-6c72b1563503 00:08:16.527 10:10:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 83daad0d-bf55-4040-97f4-ee35a35e2b90 30 00:08:16.785 10:10:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone eeb1f43d-7ebf-4309-ae5c-6c72b1563503 MY_CLONE 00:08:17.044 10:10:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=870f255e-ece5-4d39-bab2-a793fdedc7eb 00:08:17.044 10:10:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 870f255e-ece5-4d39-bab2-a793fdedc7eb 00:08:17.981 10:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3752790 00:08:26.186 Initializing NVMe Controllers 00:08:26.186 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:26.186 Controller IO queue size 128, less than required. 00:08:26.186 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
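Stripped of the xtrace noise, the nvmf_lvol setup traced above boils down to the following RPC sequence (rpc.py and spdk_nvme_perf paths shortened; the UUIDs are the ones this particular run returned):

rpc.py bdev_malloc_create 64 512                                   # -> Malloc0
rpc.py bdev_malloc_create 64 512                                   # -> Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe the two malloc bdevs
rpc.py bdev_lvol_create_lvstore raid0 lvs                          # -> 527cdef2-7268-4bf7-8160-77ab46a91a4d
rpc.py bdev_lvol_create -u 527cdef2-7268-4bf7-8160-77ab46a91a4d lvol 20   # -> 83daad0d-bf55-4040-97f4-ee35a35e2b90
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 83daad0d-bf55-4040-97f4-ee35a35e2b90
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# I/O runs against the exported lvol while it is snapshotted, resized, cloned and inflated
spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
rpc.py bdev_lvol_snapshot 83daad0d-bf55-4040-97f4-ee35a35e2b90 MY_SNAPSHOT   # -> eeb1f43d-7ebf-4309-ae5c-6c72b1563503
rpc.py bdev_lvol_resize 83daad0d-bf55-4040-97f4-ee35a35e2b90 30
rpc.py bdev_lvol_clone eeb1f43d-7ebf-4309-ae5c-6c72b1563503 MY_CLONE         # -> 870f255e-ece5-4d39-bab2-a793fdedc7eb
rpc.py bdev_lvol_inflate 870f255e-ece5-4d39-bab2-a793fdedc7eb
wait                                                                         # perf run finishing at target/nvmf_lvol.sh@53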
00:08:26.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:26.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:26.186 Initialization complete. Launching workers. 00:08:26.186 ======================================================== 00:08:26.186 Latency(us) 00:08:26.186 Device Information : IOPS MiB/s Average min max 00:08:26.186 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11418.10 44.60 11211.45 229.54 170124.08 00:08:26.186 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11142.60 43.53 11489.63 3785.69 139523.32 00:08:26.186 ======================================================== 00:08:26.186 Total : 22560.70 88.13 11348.84 229.54 170124.08 00:08:26.186 00:08:26.186 10:10:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:26.186 10:10:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 83daad0d-bf55-4040-97f4-ee35a35e2b90 00:08:26.186 10:10:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 527cdef2-7268-4bf7-8160-77ab46a91a4d 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:26.445 rmmod nvme_tcp 00:08:26.445 rmmod nvme_fabrics 00:08:26.445 rmmod nvme_keyring 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3752304 ']' 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3752304 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3752304 ']' 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3752304 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3752304 00:08:26.445 10:10:20 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3752304' 00:08:26.445 killing process with pid 3752304 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3752304 00:08:26.445 10:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3752304 00:08:28.350 10:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:28.350 10:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:28.350 10:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:28.350 10:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:28.350 10:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:28.350 10:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:28.350 10:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:28.350 10:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:28.350 10:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:28.350 10:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.350 10:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.350 10:10:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.255 10:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:30.255 00:08:30.255 real 0m23.704s 00:08:30.255 user 1m8.499s 00:08:30.255 sys 0m7.443s 00:08:30.255 10:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.255 10:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:30.255 ************************************ 00:08:30.255 END TEST nvmf_lvol 00:08:30.255 ************************************ 00:08:30.255 10:10:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:30.255 10:10:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:30.255 10:10:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.255 10:10:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:30.255 ************************************ 00:08:30.255 START TEST nvmf_lvs_grow 00:08:30.255 ************************************ 00:08:30.255 10:10:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:30.255 * Looking for test storage... 
00:08:30.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.255 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:30.255 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:30.255 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:30.255 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:30.255 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.255 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.255 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.255 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.255 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.255 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.255 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.255 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.255 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.255 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.255 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.255 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:30.255 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:30.255 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.255 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.255 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:30.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.256 --rc genhtml_branch_coverage=1 00:08:30.256 --rc genhtml_function_coverage=1 00:08:30.256 --rc genhtml_legend=1 00:08:30.256 --rc geninfo_all_blocks=1 00:08:30.256 --rc geninfo_unexecuted_blocks=1 00:08:30.256 00:08:30.256 ' 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:30.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.256 --rc genhtml_branch_coverage=1 00:08:30.256 --rc genhtml_function_coverage=1 00:08:30.256 --rc genhtml_legend=1 00:08:30.256 --rc geninfo_all_blocks=1 00:08:30.256 --rc geninfo_unexecuted_blocks=1 00:08:30.256 00:08:30.256 ' 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:30.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.256 --rc genhtml_branch_coverage=1 00:08:30.256 --rc genhtml_function_coverage=1 00:08:30.256 --rc genhtml_legend=1 00:08:30.256 --rc geninfo_all_blocks=1 00:08:30.256 --rc geninfo_unexecuted_blocks=1 00:08:30.256 00:08:30.256 ' 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:30.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.256 --rc genhtml_branch_coverage=1 00:08:30.256 --rc genhtml_function_coverage=1 00:08:30.256 --rc genhtml_legend=1 00:08:30.256 --rc geninfo_all_blocks=1 00:08:30.256 --rc geninfo_unexecuted_blocks=1 00:08:30.256 00:08:30.256 ' 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:30.256 10:10:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:30.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:30.256 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:30.516 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:30.516 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:30.516 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:30.516 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:30.516 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.516 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:30.516 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:30.516 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:30.516 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.516 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.516 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.516 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:30.516 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:30.516 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:30.516 10:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:35.787 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:35.787 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.787 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:35.788 10:10:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:35.788 Found net devices under 0000:af:00.0: cvl_0_0 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:35.788 Found net devices under 0000:af:00.1: cvl_0_1 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:35.788 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:36.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:36.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:08:36.048 00:08:36.048 --- 10.0.0.2 ping statistics --- 00:08:36.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.048 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:36.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:36.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:08:36.048 00:08:36.048 --- 10.0.0.1 ping statistics --- 00:08:36.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.048 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3758297 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3758297 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3758297 ']' 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.048 10:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:36.048 [2024-12-13 10:10:29.861271] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
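# For reference, a condensed stand-alone sketch of the nvmftestinit/nvmfappstart sequence traced above
# (an approximation, not a copy of nvmf/common.sh): the namespace, interface names, addresses and socket
# path simply mirror the values already visible in this log, and the wait loop stands in for waitforlisten.
ip netns add cvl_0_0_ns_spdk                                    # target side gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator interface stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic reach the target
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -m 0x1 &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done             # crude wait for the target's RPC socket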
00:08:36.048 [2024-12-13 10:10:29.861358] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.308 [2024-12-13 10:10:29.979638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.308 [2024-12-13 10:10:30.094078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.308 [2024-12-13 10:10:30.094127] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.308 [2024-12-13 10:10:30.094138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.308 [2024-12-13 10:10:30.094149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.308 [2024-12-13 10:10:30.094158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.308 [2024-12-13 10:10:30.095627] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.876 10:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.876 10:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:36.876 10:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:36.876 10:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:36.876 10:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:36.876 10:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.876 10:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:37.134 [2024-12-13 10:10:30.867118] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.134 10:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:37.134 10:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.134 10:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.134 10:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:37.134 ************************************ 00:08:37.134 START TEST lvs_grow_clean 00:08:37.134 ************************************ 00:08:37.134 10:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:37.134 10:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:37.134 10:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:37.134 10:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:37.134 10:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:37.134 10:10:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:37.134 10:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:37.134 10:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:37.134 10:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:37.134 10:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:37.393 10:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:37.393 10:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:37.652 10:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=44c0874a-77fc-412d-9313-e79ee2cc1b86 00:08:37.652 10:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44c0874a-77fc-412d-9313-e79ee2cc1b86 00:08:37.652 10:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:37.652 10:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:37.652 10:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:37.652 10:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 44c0874a-77fc-412d-9313-e79ee2cc1b86 lvol 150 00:08:37.910 10:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d00d5a1a-9adb-41b7-af86-65269e9b2b9b 00:08:37.910 10:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:37.910 10:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:38.169 [2024-12-13 10:10:31.875446] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:38.169 [2024-12-13 10:10:31.875529] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:38.169 true 00:08:38.169 10:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
44c0874a-77fc-412d-9313-e79ee2cc1b86 00:08:38.169 10:10:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:38.428 10:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:38.428 10:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:38.428 10:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d00d5a1a-9adb-41b7-af86-65269e9b2b9b 00:08:38.686 10:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:38.945 [2024-12-13 10:10:32.613806] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.945 10:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:38.945 10:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3758988 00:08:38.945 10:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:38.945 10:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3758988 /var/tmp/bdevperf.sock 00:08:38.945 10:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3758988 ']' 00:08:38.945 10:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:38.945 10:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:38.945 10:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.945 10:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:38.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:38.945 10:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.945 10:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:39.204 [2024-12-13 10:10:32.883950] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
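# Condensed view of the RPC sequence traced above, which backs an NVMe/TCP namespace with a logical
# volume: a sketch using the same rpc.py calls seen in the trace, with paths shortened; the lvstore and
# lvol identifiers are whatever the create calls return, not fixed values.
scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)
lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)      # 150 MiB lvol on the 200 MiB aio file
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420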
00:08:39.204 [2024-12-13 10:10:32.884034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3758988 ] 00:08:39.204 [2024-12-13 10:10:32.997975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.462 [2024-12-13 10:10:33.106250] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.030 10:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.030 10:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:40.030 10:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:40.288 Nvme0n1 00:08:40.288 10:10:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:40.288 [ 00:08:40.288 { 00:08:40.288 "name": "Nvme0n1", 00:08:40.288 "aliases": [ 00:08:40.288 "d00d5a1a-9adb-41b7-af86-65269e9b2b9b" 00:08:40.288 ], 00:08:40.288 "product_name": "NVMe disk", 00:08:40.288 "block_size": 4096, 00:08:40.288 "num_blocks": 38912, 00:08:40.288 "uuid": "d00d5a1a-9adb-41b7-af86-65269e9b2b9b", 00:08:40.288 "numa_id": 1, 00:08:40.288 "assigned_rate_limits": { 00:08:40.288 "rw_ios_per_sec": 0, 00:08:40.288 "rw_mbytes_per_sec": 0, 00:08:40.288 "r_mbytes_per_sec": 0, 00:08:40.288 "w_mbytes_per_sec": 0 00:08:40.288 }, 00:08:40.288 "claimed": false, 00:08:40.288 "zoned": false, 00:08:40.289 "supported_io_types": { 00:08:40.289 "read": true, 00:08:40.289 "write": true, 00:08:40.289 "unmap": true, 00:08:40.289 "flush": true, 00:08:40.289 "reset": true, 00:08:40.289 "nvme_admin": true, 00:08:40.289 "nvme_io": true, 00:08:40.289 "nvme_io_md": false, 00:08:40.289 "write_zeroes": true, 00:08:40.289 "zcopy": false, 00:08:40.289 "get_zone_info": false, 00:08:40.289 "zone_management": false, 00:08:40.289 "zone_append": false, 00:08:40.289 "compare": true, 00:08:40.289 "compare_and_write": true, 00:08:40.289 "abort": true, 00:08:40.289 "seek_hole": false, 00:08:40.289 "seek_data": false, 00:08:40.289 "copy": true, 00:08:40.289 "nvme_iov_md": false 00:08:40.289 }, 00:08:40.289 "memory_domains": [ 00:08:40.289 { 00:08:40.289 "dma_device_id": "system", 00:08:40.289 "dma_device_type": 1 00:08:40.289 } 00:08:40.289 ], 00:08:40.289 "driver_specific": { 00:08:40.289 "nvme": [ 00:08:40.289 { 00:08:40.289 "trid": { 00:08:40.289 "trtype": "TCP", 00:08:40.289 "adrfam": "IPv4", 00:08:40.289 "traddr": "10.0.0.2", 00:08:40.289 "trsvcid": "4420", 00:08:40.289 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:40.289 }, 00:08:40.289 "ctrlr_data": { 00:08:40.289 "cntlid": 1, 00:08:40.289 "vendor_id": "0x8086", 00:08:40.289 "model_number": "SPDK bdev Controller", 00:08:40.289 "serial_number": "SPDK0", 00:08:40.289 "firmware_revision": "25.01", 00:08:40.289 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:40.289 "oacs": { 00:08:40.289 "security": 0, 00:08:40.289 "format": 0, 00:08:40.289 "firmware": 0, 00:08:40.289 "ns_manage": 0 00:08:40.289 }, 00:08:40.289 "multi_ctrlr": true, 00:08:40.289 
"ana_reporting": false 00:08:40.289 }, 00:08:40.289 "vs": { 00:08:40.289 "nvme_version": "1.3" 00:08:40.289 }, 00:08:40.289 "ns_data": { 00:08:40.289 "id": 1, 00:08:40.289 "can_share": true 00:08:40.289 } 00:08:40.289 } 00:08:40.289 ], 00:08:40.289 "mp_policy": "active_passive" 00:08:40.289 } 00:08:40.289 } 00:08:40.289 ] 00:08:40.289 10:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3759214 00:08:40.289 10:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:40.289 10:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:40.547 Running I/O for 10 seconds... 00:08:41.483 Latency(us) 00:08:41.483 [2024-12-13T09:10:35.374Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.483 Nvme0n1 : 1.00 20385.00 79.63 0.00 0.00 0.00 0.00 0.00 00:08:41.483 [2024-12-13T09:10:35.374Z] =================================================================================================================== 00:08:41.483 [2024-12-13T09:10:35.374Z] Total : 20385.00 79.63 0.00 0.00 0.00 0.00 0.00 00:08:41.483 00:08:42.419 10:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 44c0874a-77fc-412d-9313-e79ee2cc1b86 00:08:42.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.419 Nvme0n1 : 2.00 20514.50 80.13 0.00 0.00 0.00 0.00 0.00 00:08:42.419 [2024-12-13T09:10:36.310Z] =================================================================================================================== 00:08:42.419 [2024-12-13T09:10:36.310Z] Total : 20514.50 80.13 0.00 0.00 0.00 0.00 0.00 00:08:42.419 00:08:42.677 true 00:08:42.677 10:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44c0874a-77fc-412d-9313-e79ee2cc1b86 00:08:42.677 10:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:42.677 10:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:42.677 10:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:42.677 10:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3759214 00:08:43.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.613 Nvme0n1 : 3.00 20542.00 80.24 0.00 0.00 0.00 0.00 0.00 00:08:43.613 [2024-12-13T09:10:37.504Z] =================================================================================================================== 00:08:43.613 [2024-12-13T09:10:37.504Z] Total : 20542.00 80.24 0.00 0.00 0.00 0.00 0.00 00:08:43.613 00:08:44.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.549 Nvme0n1 : 4.00 20580.00 80.39 0.00 0.00 0.00 0.00 0.00 00:08:44.549 [2024-12-13T09:10:38.440Z] 
=================================================================================================================== 00:08:44.549 [2024-12-13T09:10:38.440Z] Total : 20580.00 80.39 0.00 0.00 0.00 0.00 0.00 00:08:44.549 00:08:45.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.485 Nvme0n1 : 5.00 20633.80 80.60 0.00 0.00 0.00 0.00 0.00 00:08:45.485 [2024-12-13T09:10:39.376Z] =================================================================================================================== 00:08:45.485 [2024-12-13T09:10:39.376Z] Total : 20633.80 80.60 0.00 0.00 0.00 0.00 0.00 00:08:45.485 00:08:46.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.419 Nvme0n1 : 6.00 20627.17 80.57 0.00 0.00 0.00 0.00 0.00 00:08:46.419 [2024-12-13T09:10:40.310Z] =================================================================================================================== 00:08:46.419 [2024-12-13T09:10:40.310Z] Total : 20627.17 80.57 0.00 0.00 0.00 0.00 0.00 00:08:46.419 00:08:47.794 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.794 Nvme0n1 : 7.00 20647.14 80.65 0.00 0.00 0.00 0.00 0.00 00:08:47.794 [2024-12-13T09:10:41.685Z] =================================================================================================================== 00:08:47.794 [2024-12-13T09:10:41.685Z] Total : 20647.14 80.65 0.00 0.00 0.00 0.00 0.00 00:08:47.794 00:08:48.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.727 Nvme0n1 : 8.00 20673.38 80.76 0.00 0.00 0.00 0.00 0.00 00:08:48.727 [2024-12-13T09:10:42.618Z] =================================================================================================================== 00:08:48.727 [2024-12-13T09:10:42.618Z] Total : 20673.38 80.76 0.00 0.00 0.00 0.00 0.00 00:08:48.727 00:08:49.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.661 Nvme0n1 : 9.00 20700.11 80.86 0.00 0.00 0.00 0.00 0.00 00:08:49.661 [2024-12-13T09:10:43.552Z] =================================================================================================================== 00:08:49.661 [2024-12-13T09:10:43.552Z] Total : 20700.11 80.86 0.00 0.00 0.00 0.00 0.00 00:08:49.661 00:08:50.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.595 Nvme0n1 : 10.00 20713.30 80.91 0.00 0.00 0.00 0.00 0.00 00:08:50.595 [2024-12-13T09:10:44.486Z] =================================================================================================================== 00:08:50.595 [2024-12-13T09:10:44.486Z] Total : 20713.30 80.91 0.00 0.00 0.00 0.00 0.00 00:08:50.595 00:08:50.595 00:08:50.595 Latency(us) 00:08:50.595 [2024-12-13T09:10:44.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.595 Nvme0n1 : 10.00 20716.07 80.92 0.00 0.00 6175.49 2793.08 12358.22 00:08:50.595 [2024-12-13T09:10:44.486Z] =================================================================================================================== 00:08:50.595 [2024-12-13T09:10:44.486Z] Total : 20716.07 80.92 0.00 0.00 6175.49 2793.08 12358.22 00:08:50.595 { 00:08:50.595 "results": [ 00:08:50.595 { 00:08:50.595 "job": "Nvme0n1", 00:08:50.595 "core_mask": "0x2", 00:08:50.595 "workload": "randwrite", 00:08:50.595 "status": "finished", 00:08:50.595 "queue_depth": 128, 00:08:50.595 "io_size": 4096, 00:08:50.595 
"runtime": 10.004022, 00:08:50.595 "iops": 20716.067997451424, 00:08:50.595 "mibps": 80.92214061504463, 00:08:50.595 "io_failed": 0, 00:08:50.595 "io_timeout": 0, 00:08:50.595 "avg_latency_us": 6175.491770951379, 00:08:50.595 "min_latency_us": 2793.0819047619048, 00:08:50.595 "max_latency_us": 12358.217142857144 00:08:50.595 } 00:08:50.595 ], 00:08:50.595 "core_count": 1 00:08:50.595 } 00:08:50.595 10:10:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3758988 00:08:50.595 10:10:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3758988 ']' 00:08:50.595 10:10:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3758988 00:08:50.595 10:10:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:50.595 10:10:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.596 10:10:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3758988 00:08:50.596 10:10:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:50.596 10:10:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:50.596 10:10:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3758988' 00:08:50.596 killing process with pid 3758988 00:08:50.596 10:10:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3758988 00:08:50.596 Received shutdown signal, test time was about 10.000000 seconds 00:08:50.596 00:08:50.596 Latency(us) 00:08:50.596 [2024-12-13T09:10:44.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.596 [2024-12-13T09:10:44.487Z] =================================================================================================================== 00:08:50.596 [2024-12-13T09:10:44.487Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:50.596 10:10:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3758988 00:08:51.529 10:10:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:51.529 10:10:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:51.787 10:10:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44c0874a-77fc-412d-9313-e79ee2cc1b86 00:08:51.787 10:10:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:52.045 10:10:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:52.045 10:10:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:52.045 10:10:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:52.302 [2024-12-13 10:10:45.956316] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:52.302 10:10:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44c0874a-77fc-412d-9313-e79ee2cc1b86 00:08:52.302 10:10:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:52.302 10:10:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44c0874a-77fc-412d-9313-e79ee2cc1b86 00:08:52.302 10:10:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:52.302 10:10:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:52.302 10:10:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:52.302 10:10:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:52.302 10:10:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:52.302 10:10:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:52.302 10:10:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:52.302 10:10:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:52.303 10:10:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44c0874a-77fc-412d-9313-e79ee2cc1b86 00:08:52.303 request: 00:08:52.303 { 00:08:52.303 "uuid": "44c0874a-77fc-412d-9313-e79ee2cc1b86", 00:08:52.303 "method": "bdev_lvol_get_lvstores", 00:08:52.303 "req_id": 1 00:08:52.303 } 00:08:52.303 Got JSON-RPC error response 00:08:52.303 response: 00:08:52.303 { 00:08:52.303 "code": -19, 00:08:52.303 "message": "No such device" 00:08:52.303 } 00:08:52.303 10:10:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:52.303 10:10:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:52.303 10:10:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:52.303 10:10:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:52.303 10:10:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:52.560 aio_bdev 00:08:52.560 10:10:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d00d5a1a-9adb-41b7-af86-65269e9b2b9b 00:08:52.560 10:10:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=d00d5a1a-9adb-41b7-af86-65269e9b2b9b 00:08:52.560 10:10:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:52.560 10:10:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:52.560 10:10:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:52.560 10:10:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:52.560 10:10:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:52.818 10:10:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d00d5a1a-9adb-41b7-af86-65269e9b2b9b -t 2000 00:08:53.077 [ 00:08:53.077 { 00:08:53.077 "name": "d00d5a1a-9adb-41b7-af86-65269e9b2b9b", 00:08:53.077 "aliases": [ 00:08:53.077 "lvs/lvol" 00:08:53.077 ], 00:08:53.077 "product_name": "Logical Volume", 00:08:53.077 "block_size": 4096, 00:08:53.077 "num_blocks": 38912, 00:08:53.077 "uuid": "d00d5a1a-9adb-41b7-af86-65269e9b2b9b", 00:08:53.077 "assigned_rate_limits": { 00:08:53.077 "rw_ios_per_sec": 0, 00:08:53.077 "rw_mbytes_per_sec": 0, 00:08:53.077 "r_mbytes_per_sec": 0, 00:08:53.077 "w_mbytes_per_sec": 0 00:08:53.077 }, 00:08:53.077 "claimed": false, 00:08:53.077 "zoned": false, 00:08:53.077 "supported_io_types": { 00:08:53.077 "read": true, 00:08:53.077 "write": true, 00:08:53.077 "unmap": true, 00:08:53.077 "flush": false, 00:08:53.077 "reset": true, 00:08:53.077 "nvme_admin": false, 00:08:53.077 "nvme_io": false, 00:08:53.077 "nvme_io_md": false, 00:08:53.077 "write_zeroes": true, 00:08:53.077 "zcopy": false, 00:08:53.077 "get_zone_info": false, 00:08:53.077 "zone_management": false, 00:08:53.077 "zone_append": false, 00:08:53.077 "compare": false, 00:08:53.077 "compare_and_write": false, 00:08:53.077 "abort": false, 00:08:53.077 "seek_hole": true, 00:08:53.077 "seek_data": true, 00:08:53.077 "copy": false, 00:08:53.077 "nvme_iov_md": false 00:08:53.077 }, 00:08:53.077 "driver_specific": { 00:08:53.077 "lvol": { 00:08:53.077 "lvol_store_uuid": "44c0874a-77fc-412d-9313-e79ee2cc1b86", 00:08:53.077 "base_bdev": "aio_bdev", 00:08:53.077 "thin_provision": false, 00:08:53.077 "num_allocated_clusters": 38, 00:08:53.077 "snapshot": false, 00:08:53.077 "clone": false, 00:08:53.077 "esnap_clone": false 00:08:53.077 } 00:08:53.077 } 00:08:53.077 } 00:08:53.077 ] 00:08:53.077 10:10:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:53.077 10:10:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44c0874a-77fc-412d-9313-e79ee2cc1b86 00:08:53.077 
10:10:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:53.077 10:10:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:53.077 10:10:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 44c0874a-77fc-412d-9313-e79ee2cc1b86 00:08:53.077 10:10:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:53.335 10:10:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:53.335 10:10:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d00d5a1a-9adb-41b7-af86-65269e9b2b9b 00:08:53.594 10:10:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 44c0874a-77fc-412d-9313-e79ee2cc1b86 00:08:53.852 10:10:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:53.852 10:10:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:53.852 00:08:53.852 real 0m16.805s 00:08:53.852 user 0m16.418s 00:08:53.852 sys 0m1.538s 00:08:53.852 10:10:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.852 10:10:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:53.852 ************************************ 00:08:53.852 END TEST lvs_grow_clean 00:08:53.852 ************************************ 00:08:54.110 10:10:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:54.110 10:10:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:54.110 10:10:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.110 10:10:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:54.110 ************************************ 00:08:54.110 START TEST lvs_grow_dirty 00:08:54.110 ************************************ 00:08:54.110 10:10:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:54.110 10:10:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:54.110 10:10:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:54.110 10:10:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:54.110 10:10:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:54.110 10:10:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:54.110 10:10:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:54.110 10:10:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:54.110 10:10:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:54.110 10:10:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:54.368 10:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:54.368 10:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:54.368 10:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b2b5c714-43f2-4225-9c3e-cdbc9f624e48 00:08:54.368 10:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2b5c714-43f2-4225-9c3e-cdbc9f624e48 00:08:54.368 10:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:54.626 10:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:54.626 10:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:54.626 10:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b2b5c714-43f2-4225-9c3e-cdbc9f624e48 lvol 150 00:08:54.884 10:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ce991d52-cf9d-4d3e-8a6f-b893dbc658d3 00:08:54.884 10:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:54.884 10:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:54.884 [2024-12-13 10:10:48.744550] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:54.884 [2024-12-13 10:10:48.744652] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:54.884 true 00:08:54.884 10:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2b5c714-43f2-4225-9c3e-cdbc9f624e48 00:08:54.884 10:10:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:55.142 10:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:55.142 10:10:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:55.400 10:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ce991d52-cf9d-4d3e-8a6f-b893dbc658d3 00:08:55.661 10:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:55.662 [2024-12-13 10:10:49.466870] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.662 10:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:55.920 10:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3761749 00:08:55.920 10:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:55.920 10:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:55.920 10:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3761749 /var/tmp/bdevperf.sock 00:08:55.920 10:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3761749 ']' 00:08:55.920 10:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:55.920 10:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.920 10:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:55.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:55.920 10:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.920 10:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:55.920 [2024-12-13 10:10:49.732148] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:55.920 [2024-12-13 10:10:49.732239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3761749 ] 00:08:56.178 [2024-12-13 10:10:49.846109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.178 [2024-12-13 10:10:49.956719] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.744 10:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.744 10:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:56.744 10:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:57.068 Nvme0n1 00:08:57.347 10:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:57.347 [ 00:08:57.347 { 00:08:57.347 "name": "Nvme0n1", 00:08:57.347 "aliases": [ 00:08:57.347 "ce991d52-cf9d-4d3e-8a6f-b893dbc658d3" 00:08:57.347 ], 00:08:57.347 "product_name": "NVMe disk", 00:08:57.347 "block_size": 4096, 00:08:57.347 "num_blocks": 38912, 00:08:57.347 "uuid": "ce991d52-cf9d-4d3e-8a6f-b893dbc658d3", 00:08:57.347 "numa_id": 1, 00:08:57.347 "assigned_rate_limits": { 00:08:57.347 "rw_ios_per_sec": 0, 00:08:57.347 "rw_mbytes_per_sec": 0, 00:08:57.347 "r_mbytes_per_sec": 0, 00:08:57.347 "w_mbytes_per_sec": 0 00:08:57.348 }, 00:08:57.348 "claimed": false, 00:08:57.348 "zoned": false, 00:08:57.348 "supported_io_types": { 00:08:57.348 "read": true, 00:08:57.348 "write": true, 00:08:57.348 "unmap": true, 00:08:57.348 "flush": true, 00:08:57.348 "reset": true, 00:08:57.348 "nvme_admin": true, 00:08:57.348 "nvme_io": true, 00:08:57.348 "nvme_io_md": false, 00:08:57.348 "write_zeroes": true, 00:08:57.348 "zcopy": false, 00:08:57.348 "get_zone_info": false, 00:08:57.348 "zone_management": false, 00:08:57.348 "zone_append": false, 00:08:57.348 "compare": true, 00:08:57.348 "compare_and_write": true, 00:08:57.348 "abort": true, 00:08:57.348 "seek_hole": false, 00:08:57.348 "seek_data": false, 00:08:57.348 "copy": true, 00:08:57.348 "nvme_iov_md": false 00:08:57.348 }, 00:08:57.348 "memory_domains": [ 00:08:57.348 { 00:08:57.348 "dma_device_id": "system", 00:08:57.348 "dma_device_type": 1 00:08:57.348 } 00:08:57.348 ], 00:08:57.348 "driver_specific": { 00:08:57.348 "nvme": [ 00:08:57.348 { 00:08:57.348 "trid": { 00:08:57.348 "trtype": "TCP", 00:08:57.348 "adrfam": "IPv4", 00:08:57.348 "traddr": "10.0.0.2", 00:08:57.348 "trsvcid": "4420", 00:08:57.348 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:57.348 }, 00:08:57.348 "ctrlr_data": { 00:08:57.348 "cntlid": 1, 00:08:57.348 "vendor_id": "0x8086", 00:08:57.348 "model_number": "SPDK bdev Controller", 00:08:57.348 "serial_number": "SPDK0", 00:08:57.348 "firmware_revision": "25.01", 00:08:57.348 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:57.348 "oacs": { 00:08:57.348 "security": 0, 00:08:57.348 "format": 0, 00:08:57.348 "firmware": 0, 00:08:57.348 "ns_manage": 0 00:08:57.348 }, 00:08:57.348 "multi_ctrlr": true, 00:08:57.348 
"ana_reporting": false 00:08:57.348 }, 00:08:57.348 "vs": { 00:08:57.348 "nvme_version": "1.3" 00:08:57.348 }, 00:08:57.348 "ns_data": { 00:08:57.348 "id": 1, 00:08:57.348 "can_share": true 00:08:57.348 } 00:08:57.348 } 00:08:57.348 ], 00:08:57.348 "mp_policy": "active_passive" 00:08:57.348 } 00:08:57.348 } 00:08:57.348 ] 00:08:57.348 10:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3761981 00:08:57.348 10:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:57.348 10:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:57.348 Running I/O for 10 seconds... 00:08:58.722 Latency(us) 00:08:58.722 [2024-12-13T09:10:52.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.722 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.722 Nvme0n1 : 1.00 20386.00 79.63 0.00 0.00 0.00 0.00 0.00 00:08:58.722 [2024-12-13T09:10:52.613Z] =================================================================================================================== 00:08:58.722 [2024-12-13T09:10:52.613Z] Total : 20386.00 79.63 0.00 0.00 0.00 0.00 0.00 00:08:58.722 00:08:59.288 10:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b2b5c714-43f2-4225-9c3e-cdbc9f624e48 00:08:59.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.547 Nvme0n1 : 2.00 20514.00 80.13 0.00 0.00 0.00 0.00 0.00 00:08:59.547 [2024-12-13T09:10:53.438Z] =================================================================================================================== 00:08:59.547 [2024-12-13T09:10:53.438Z] Total : 20514.00 80.13 0.00 0.00 0.00 0.00 0.00 00:08:59.547 00:08:59.547 true 00:08:59.547 10:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2b5c714-43f2-4225-9c3e-cdbc9f624e48 00:08:59.547 10:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:59.805 10:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:59.805 10:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:59.805 10:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3761981 00:09:00.371 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.371 Nvme0n1 : 3.00 20555.67 80.30 0.00 0.00 0.00 0.00 0.00 00:09:00.371 [2024-12-13T09:10:54.262Z] =================================================================================================================== 00:09:00.371 [2024-12-13T09:10:54.262Z] Total : 20555.67 80.30 0.00 0.00 0.00 0.00 0.00 00:09:00.371 00:09:01.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.745 Nvme0n1 : 4.00 20599.50 80.47 0.00 0.00 0.00 0.00 0.00 00:09:01.745 [2024-12-13T09:10:55.636Z] 
=================================================================================================================== 00:09:01.745 [2024-12-13T09:10:55.636Z] Total : 20599.50 80.47 0.00 0.00 0.00 0.00 0.00 00:09:01.745 00:09:02.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.678 Nvme0n1 : 5.00 20637.40 80.61 0.00 0.00 0.00 0.00 0.00 00:09:02.678 [2024-12-13T09:10:56.569Z] =================================================================================================================== 00:09:02.678 [2024-12-13T09:10:56.569Z] Total : 20637.40 80.61 0.00 0.00 0.00 0.00 0.00 00:09:02.678 00:09:03.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.613 Nvme0n1 : 6.00 20665.17 80.72 0.00 0.00 0.00 0.00 0.00 00:09:03.613 [2024-12-13T09:10:57.504Z] =================================================================================================================== 00:09:03.613 [2024-12-13T09:10:57.504Z] Total : 20665.17 80.72 0.00 0.00 0.00 0.00 0.00 00:09:03.613 00:09:04.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.548 Nvme0n1 : 7.00 20686.14 80.81 0.00 0.00 0.00 0.00 0.00 00:09:04.548 [2024-12-13T09:10:58.439Z] =================================================================================================================== 00:09:04.548 [2024-12-13T09:10:58.439Z] Total : 20686.14 80.81 0.00 0.00 0.00 0.00 0.00 00:09:04.548 00:09:05.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.482 Nvme0n1 : 8.00 20707.38 80.89 0.00 0.00 0.00 0.00 0.00 00:09:05.482 [2024-12-13T09:10:59.373Z] =================================================================================================================== 00:09:05.482 [2024-12-13T09:10:59.373Z] Total : 20707.38 80.89 0.00 0.00 0.00 0.00 0.00 00:09:05.482 00:09:06.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.416 Nvme0n1 : 9.00 20685.67 80.80 0.00 0.00 0.00 0.00 0.00 00:09:06.416 [2024-12-13T09:11:00.307Z] =================================================================================================================== 00:09:06.416 [2024-12-13T09:11:00.307Z] Total : 20685.67 80.80 0.00 0.00 0.00 0.00 0.00 00:09:06.416 00:09:07.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.350 Nvme0n1 : 10.00 20700.60 80.86 0.00 0.00 0.00 0.00 0.00 00:09:07.350 [2024-12-13T09:11:01.241Z] =================================================================================================================== 00:09:07.350 [2024-12-13T09:11:01.241Z] Total : 20700.60 80.86 0.00 0.00 0.00 0.00 0.00 00:09:07.350 00:09:07.350 00:09:07.350 Latency(us) 00:09:07.350 [2024-12-13T09:11:01.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.350 Nvme0n1 : 10.01 20700.94 80.86 0.00 0.00 6180.14 1817.84 13856.18 00:09:07.350 [2024-12-13T09:11:01.241Z] =================================================================================================================== 00:09:07.350 [2024-12-13T09:11:01.241Z] Total : 20700.94 80.86 0.00 0.00 6180.14 1817.84 13856.18 00:09:07.608 { 00:09:07.609 "results": [ 00:09:07.609 { 00:09:07.609 "job": "Nvme0n1", 00:09:07.609 "core_mask": "0x2", 00:09:07.609 "workload": "randwrite", 00:09:07.609 "status": "finished", 00:09:07.609 "queue_depth": 128, 00:09:07.609 "io_size": 4096, 00:09:07.609 
"runtime": 10.00602, 00:09:07.609 "iops": 20700.938035302748, 00:09:07.609 "mibps": 80.86303920040136, 00:09:07.609 "io_failed": 0, 00:09:07.609 "io_timeout": 0, 00:09:07.609 "avg_latency_us": 6180.141899878937, 00:09:07.609 "min_latency_us": 1817.8438095238096, 00:09:07.609 "max_latency_us": 13856.182857142858 00:09:07.609 } 00:09:07.609 ], 00:09:07.609 "core_count": 1 00:09:07.609 } 00:09:07.609 10:11:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3761749 00:09:07.609 10:11:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3761749 ']' 00:09:07.609 10:11:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3761749 00:09:07.609 10:11:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:07.609 10:11:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.609 10:11:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3761749 00:09:07.609 10:11:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:07.609 10:11:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:07.609 10:11:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3761749' 00:09:07.609 killing process with pid 3761749 00:09:07.609 10:11:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3761749 00:09:07.609 Received shutdown signal, test time was about 10.000000 seconds 00:09:07.609 00:09:07.609 Latency(us) 00:09:07.609 [2024-12-13T09:11:01.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.609 [2024-12-13T09:11:01.500Z] =================================================================================================================== 00:09:07.609 [2024-12-13T09:11:01.500Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:07.609 10:11:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3761749 00:09:08.544 10:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:08.544 10:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:08.802 10:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2b5c714-43f2-4225-9c3e-cdbc9f624e48 00:09:08.802 10:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:09.061 10:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:09.061 10:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:09.061 10:11:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3758297 00:09:09.061 10:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3758297 00:09:09.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3758297 Killed "${NVMF_APP[@]}" "$@" 00:09:09.061 10:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:09.061 10:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:09.061 10:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:09.061 10:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:09.061 10:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:09.061 10:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3763994 00:09:09.061 10:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3763994 00:09:09.061 10:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3763994 ']' 00:09:09.061 10:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.061 10:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.061 10:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.061 10:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:09.061 10:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.061 10:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:09.061 [2024-12-13 10:11:02.943383] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:09.061 [2024-12-13 10:11:02.943496] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.320 [2024-12-13 10:11:03.066767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.320 [2024-12-13 10:11:03.170592] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.320 [2024-12-13 10:11:03.170637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.320 [2024-12-13 10:11:03.170647] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.320 [2024-12-13 10:11:03.170656] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:09:09.320 [2024-12-13 10:11:03.170664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.320 [2024-12-13 10:11:03.171907] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.887 10:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.887 10:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:09.887 10:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:09.887 10:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:09.887 10:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:09.887 10:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.887 10:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:10.145 [2024-12-13 10:11:03.944422] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:10.145 [2024-12-13 10:11:03.944581] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:10.145 [2024-12-13 10:11:03.944618] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:10.145 10:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:10.145 10:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ce991d52-cf9d-4d3e-8a6f-b893dbc658d3 00:09:10.145 10:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ce991d52-cf9d-4d3e-8a6f-b893dbc658d3 00:09:10.145 10:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.145 10:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:10.145 10:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.145 10:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.145 10:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:10.404 10:11:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ce991d52-cf9d-4d3e-8a6f-b893dbc658d3 -t 2000 00:09:10.662 [ 00:09:10.662 { 00:09:10.662 "name": "ce991d52-cf9d-4d3e-8a6f-b893dbc658d3", 00:09:10.662 "aliases": [ 00:09:10.662 "lvs/lvol" 00:09:10.662 ], 00:09:10.662 "product_name": "Logical Volume", 00:09:10.662 "block_size": 4096, 00:09:10.662 "num_blocks": 38912, 00:09:10.662 "uuid": "ce991d52-cf9d-4d3e-8a6f-b893dbc658d3", 00:09:10.662 "assigned_rate_limits": { 00:09:10.662 "rw_ios_per_sec": 0, 00:09:10.662 "rw_mbytes_per_sec": 0, 
00:09:10.662 "r_mbytes_per_sec": 0, 00:09:10.662 "w_mbytes_per_sec": 0 00:09:10.662 }, 00:09:10.662 "claimed": false, 00:09:10.662 "zoned": false, 00:09:10.662 "supported_io_types": { 00:09:10.662 "read": true, 00:09:10.662 "write": true, 00:09:10.662 "unmap": true, 00:09:10.662 "flush": false, 00:09:10.662 "reset": true, 00:09:10.662 "nvme_admin": false, 00:09:10.662 "nvme_io": false, 00:09:10.662 "nvme_io_md": false, 00:09:10.662 "write_zeroes": true, 00:09:10.662 "zcopy": false, 00:09:10.662 "get_zone_info": false, 00:09:10.662 "zone_management": false, 00:09:10.662 "zone_append": false, 00:09:10.662 "compare": false, 00:09:10.662 "compare_and_write": false, 00:09:10.662 "abort": false, 00:09:10.662 "seek_hole": true, 00:09:10.662 "seek_data": true, 00:09:10.662 "copy": false, 00:09:10.662 "nvme_iov_md": false 00:09:10.662 }, 00:09:10.662 "driver_specific": { 00:09:10.662 "lvol": { 00:09:10.662 "lvol_store_uuid": "b2b5c714-43f2-4225-9c3e-cdbc9f624e48", 00:09:10.662 "base_bdev": "aio_bdev", 00:09:10.662 "thin_provision": false, 00:09:10.663 "num_allocated_clusters": 38, 00:09:10.663 "snapshot": false, 00:09:10.663 "clone": false, 00:09:10.663 "esnap_clone": false 00:09:10.663 } 00:09:10.663 } 00:09:10.663 } 00:09:10.663 ] 00:09:10.663 10:11:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:10.663 10:11:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2b5c714-43f2-4225-9c3e-cdbc9f624e48 00:09:10.663 10:11:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:10.663 10:11:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:10.663 10:11:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2b5c714-43f2-4225-9c3e-cdbc9f624e48 00:09:10.663 10:11:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:10.921 10:11:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:10.921 10:11:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:11.179 [2024-12-13 10:11:04.888657] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:11.179 10:11:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2b5c714-43f2-4225-9c3e-cdbc9f624e48 00:09:11.179 10:11:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:11.179 10:11:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2b5c714-43f2-4225-9c3e-cdbc9f624e48 00:09:11.179 10:11:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.179 10:11:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.179 10:11:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.179 10:11:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.179 10:11:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.179 10:11:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:11.179 10:11:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.179 10:11:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:11.179 10:11:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2b5c714-43f2-4225-9c3e-cdbc9f624e48 00:09:11.565 request: 00:09:11.565 { 00:09:11.565 "uuid": "b2b5c714-43f2-4225-9c3e-cdbc9f624e48", 00:09:11.565 "method": "bdev_lvol_get_lvstores", 00:09:11.565 "req_id": 1 00:09:11.565 } 00:09:11.565 Got JSON-RPC error response 00:09:11.565 response: 00:09:11.565 { 00:09:11.565 "code": -19, 00:09:11.565 "message": "No such device" 00:09:11.565 } 00:09:11.565 10:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:11.565 10:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:11.565 10:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:11.565 10:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:11.565 10:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:11.565 aio_bdev 00:09:11.565 10:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ce991d52-cf9d-4d3e-8a6f-b893dbc658d3 00:09:11.565 10:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ce991d52-cf9d-4d3e-8a6f-b893dbc658d3 00:09:11.565 10:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:11.565 10:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:11.565 10:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:11.565 10:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:11.565 10:11:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:11.823 10:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ce991d52-cf9d-4d3e-8a6f-b893dbc658d3 -t 2000 00:09:11.823 [ 00:09:11.823 { 00:09:11.823 "name": "ce991d52-cf9d-4d3e-8a6f-b893dbc658d3", 00:09:11.823 "aliases": [ 00:09:11.823 "lvs/lvol" 00:09:11.823 ], 00:09:11.823 "product_name": "Logical Volume", 00:09:11.823 "block_size": 4096, 00:09:11.823 "num_blocks": 38912, 00:09:11.823 "uuid": "ce991d52-cf9d-4d3e-8a6f-b893dbc658d3", 00:09:11.823 "assigned_rate_limits": { 00:09:11.823 "rw_ios_per_sec": 0, 00:09:11.823 "rw_mbytes_per_sec": 0, 00:09:11.823 "r_mbytes_per_sec": 0, 00:09:11.823 "w_mbytes_per_sec": 0 00:09:11.823 }, 00:09:11.823 "claimed": false, 00:09:11.823 "zoned": false, 00:09:11.823 "supported_io_types": { 00:09:11.823 "read": true, 00:09:11.823 "write": true, 00:09:11.823 "unmap": true, 00:09:11.823 "flush": false, 00:09:11.823 "reset": true, 00:09:11.823 "nvme_admin": false, 00:09:11.823 "nvme_io": false, 00:09:11.823 "nvme_io_md": false, 00:09:11.823 "write_zeroes": true, 00:09:11.823 "zcopy": false, 00:09:11.823 "get_zone_info": false, 00:09:11.823 "zone_management": false, 00:09:11.823 "zone_append": false, 00:09:11.823 "compare": false, 00:09:11.823 "compare_and_write": false, 00:09:11.823 "abort": false, 00:09:11.823 "seek_hole": true, 00:09:11.823 "seek_data": true, 00:09:11.823 "copy": false, 00:09:11.823 "nvme_iov_md": false 00:09:11.823 }, 00:09:11.823 "driver_specific": { 00:09:11.823 "lvol": { 00:09:11.823 "lvol_store_uuid": "b2b5c714-43f2-4225-9c3e-cdbc9f624e48", 00:09:11.823 "base_bdev": "aio_bdev", 00:09:11.823 "thin_provision": false, 00:09:11.823 "num_allocated_clusters": 38, 00:09:11.823 "snapshot": false, 00:09:11.823 "clone": false, 00:09:11.823 "esnap_clone": false 00:09:11.823 } 00:09:11.823 } 00:09:11.823 } 00:09:11.823 ] 00:09:11.823 10:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:11.823 10:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2b5c714-43f2-4225-9c3e-cdbc9f624e48 00:09:11.823 10:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:12.082 10:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:12.082 10:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b2b5c714-43f2-4225-9c3e-cdbc9f624e48 00:09:12.082 10:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:12.340 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:12.340 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ce991d52-cf9d-4d3e-8a6f-b893dbc658d3 00:09:12.598 10:11:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b2b5c714-43f2-4225-9c3e-cdbc9f624e48 00:09:12.598 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:12.856 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:12.856 00:09:12.856 real 0m18.863s 00:09:12.856 user 0m48.422s 00:09:12.856 sys 0m3.777s 00:09:12.856 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.856 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:12.856 ************************************ 00:09:12.856 END TEST lvs_grow_dirty 00:09:12.856 ************************************ 00:09:12.856 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:12.856 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:12.856 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:12.856 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:12.856 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:12.856 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:12.856 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:12.856 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:12.856 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:12.856 nvmf_trace.0 00:09:12.856 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:12.856 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:12.856 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:12.856 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:13.115 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:13.115 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:13.115 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:13.115 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:13.115 rmmod nvme_tcp 00:09:13.115 rmmod nvme_fabrics 00:09:13.115 rmmod nvme_keyring 00:09:13.115 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:13.115 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:13.115 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:13.115 
10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3763994 ']' 00:09:13.115 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3763994 00:09:13.115 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3763994 ']' 00:09:13.115 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3763994 00:09:13.115 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:13.115 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.115 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3763994 00:09:13.115 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.115 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.115 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3763994' 00:09:13.115 killing process with pid 3763994 00:09:13.115 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3763994 00:09:13.115 10:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3763994 00:09:14.049 10:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:14.049 10:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:14.049 10:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:14.049 10:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:14.049 10:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:14.049 10:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:14.049 10:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:14.307 10:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:14.307 10:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:14.307 10:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.307 10:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.307 10:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.209 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:16.209 00:09:16.209 real 0m46.063s 00:09:16.209 user 1m11.828s 00:09:16.209 sys 0m10.119s 00:09:16.209 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.209 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:16.209 ************************************ 00:09:16.209 END TEST nvmf_lvs_grow 00:09:16.209 ************************************ 00:09:16.209 10:11:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:16.209 10:11:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:16.209 10:11:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.209 10:11:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.209 ************************************ 00:09:16.209 START TEST nvmf_bdev_io_wait 00:09:16.209 ************************************ 00:09:16.209 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:16.468 * Looking for test storage... 00:09:16.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:16.468 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:16.468 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:09:16.468 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:16.468 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:16.468 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.468 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.468 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.468 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.468 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.468 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.468 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.468 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.468 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.468 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.468 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.468 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:16.468 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:16.468 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.468 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.468 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:16.468 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:16.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.469 --rc genhtml_branch_coverage=1 00:09:16.469 --rc genhtml_function_coverage=1 00:09:16.469 --rc genhtml_legend=1 00:09:16.469 --rc geninfo_all_blocks=1 00:09:16.469 --rc geninfo_unexecuted_blocks=1 00:09:16.469 00:09:16.469 ' 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:16.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.469 --rc genhtml_branch_coverage=1 00:09:16.469 --rc genhtml_function_coverage=1 00:09:16.469 --rc genhtml_legend=1 00:09:16.469 --rc geninfo_all_blocks=1 00:09:16.469 --rc geninfo_unexecuted_blocks=1 00:09:16.469 00:09:16.469 ' 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:16.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.469 --rc genhtml_branch_coverage=1 00:09:16.469 --rc genhtml_function_coverage=1 00:09:16.469 --rc genhtml_legend=1 00:09:16.469 --rc geninfo_all_blocks=1 00:09:16.469 --rc geninfo_unexecuted_blocks=1 00:09:16.469 00:09:16.469 ' 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:16.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.469 --rc genhtml_branch_coverage=1 00:09:16.469 --rc genhtml_function_coverage=1 00:09:16.469 --rc genhtml_legend=1 00:09:16.469 --rc geninfo_all_blocks=1 00:09:16.469 --rc geninfo_unexecuted_blocks=1 00:09:16.469 00:09:16.469 ' 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.469 10:11:10 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:16.469 10:11:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:21.735 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:21.735 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:21.735 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:21.735 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:21.735 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:21.735 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:21.735 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:21.735 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:21.735 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:21.736 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:21.736 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.736 10:11:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:21.736 Found net devices under 0000:af:00.0: cvl_0_0 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:21.736 Found net devices under 0000:af:00.1: cvl_0_1 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:21.736 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:21.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:21.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:09:21.994 00:09:21.994 --- 10.0.0.2 ping statistics --- 00:09:21.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.994 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:21.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:21.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:09:21.994 00:09:21.994 --- 10.0.0.1 ping statistics --- 00:09:21.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.994 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3768214 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3768214 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3768214 ']' 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.994 10:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:21.994 [2024-12-13 10:11:15.842780] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
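For reference, the interface bring-up traced above reduces to the following sketch. The device names (cvl_0_0, cvl_0_1), the namespace name and the 10.0.0.x addresses are the ones observed in this run and will differ on other hosts; paths are shortened, and the iptables rule in the actual run also carries an SPDK_NVMF comment tag that is omitted here.

  ip -4 addr flush cvl_0_0                                   # clear stale addresses on both ports
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                               # target port lives in its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address on the host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                         # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target namespace -> host
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &   # start the target inside the namespace; the harness then waits for its RPC socket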
00:09:21.994 [2024-12-13 10:11:15.842869] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.253 [2024-12-13 10:11:15.961665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.253 [2024-12-13 10:11:16.071515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.253 [2024-12-13 10:11:16.071560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.253 [2024-12-13 10:11:16.071570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.253 [2024-12-13 10:11:16.071596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.253 [2024-12-13 10:11:16.071605] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.253 [2024-12-13 10:11:16.073899] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.253 [2024-12-13 10:11:16.073975] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.253 [2024-12-13 10:11:16.074041] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.253 [2024-12-13 10:11:16.074050] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.819 10:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.819 10:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:22.819 10:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:22.819 10:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:22.819 10:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.819 10:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.819 10:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:22.819 10:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.819 10:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.819 10:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.819 10:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:22.819 10:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.819 10:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:23.077 10:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.077 10:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:23.077 10:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.077 10:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:23.077 [2024-12-13 10:11:16.956492] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.077 10:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.077 10:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:23.077 10:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.077 10:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:23.335 Malloc0 00:09:23.335 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.335 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:23.335 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.335 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:23.335 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.335 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:23.335 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.335 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:23.335 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.335 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.335 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.335 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:23.335 [2024-12-13 10:11:17.071083] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.335 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.335 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3768460 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3768462 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:23.336 { 00:09:23.336 "params": { 
00:09:23.336 "name": "Nvme$subsystem", 00:09:23.336 "trtype": "$TEST_TRANSPORT", 00:09:23.336 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:23.336 "adrfam": "ipv4", 00:09:23.336 "trsvcid": "$NVMF_PORT", 00:09:23.336 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:23.336 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:23.336 "hdgst": ${hdgst:-false}, 00:09:23.336 "ddgst": ${ddgst:-false} 00:09:23.336 }, 00:09:23.336 "method": "bdev_nvme_attach_controller" 00:09:23.336 } 00:09:23.336 EOF 00:09:23.336 )") 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3768464 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3768467 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:23.336 { 00:09:23.336 "params": { 00:09:23.336 "name": "Nvme$subsystem", 00:09:23.336 "trtype": "$TEST_TRANSPORT", 00:09:23.336 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:23.336 "adrfam": "ipv4", 00:09:23.336 "trsvcid": "$NVMF_PORT", 00:09:23.336 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:23.336 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:23.336 "hdgst": ${hdgst:-false}, 00:09:23.336 "ddgst": ${ddgst:-false} 00:09:23.336 }, 00:09:23.336 "method": "bdev_nvme_attach_controller" 00:09:23.336 } 00:09:23.336 EOF 00:09:23.336 )") 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:23.336 { 00:09:23.336 "params": { 00:09:23.336 "name": "Nvme$subsystem", 00:09:23.336 "trtype": "$TEST_TRANSPORT", 00:09:23.336 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:09:23.336 "adrfam": "ipv4", 00:09:23.336 "trsvcid": "$NVMF_PORT", 00:09:23.336 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:23.336 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:23.336 "hdgst": ${hdgst:-false}, 00:09:23.336 "ddgst": ${ddgst:-false} 00:09:23.336 }, 00:09:23.336 "method": "bdev_nvme_attach_controller" 00:09:23.336 } 00:09:23.336 EOF 00:09:23.336 )") 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:23.336 { 00:09:23.336 "params": { 00:09:23.336 "name": "Nvme$subsystem", 00:09:23.336 "trtype": "$TEST_TRANSPORT", 00:09:23.336 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:23.336 "adrfam": "ipv4", 00:09:23.336 "trsvcid": "$NVMF_PORT", 00:09:23.336 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:23.336 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:23.336 "hdgst": ${hdgst:-false}, 00:09:23.336 "ddgst": ${ddgst:-false} 00:09:23.336 }, 00:09:23.336 "method": "bdev_nvme_attach_controller" 00:09:23.336 } 00:09:23.336 EOF 00:09:23.336 )") 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3768460 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:23.336 "params": { 00:09:23.336 "name": "Nvme1", 00:09:23.336 "trtype": "tcp", 00:09:23.336 "traddr": "10.0.0.2", 00:09:23.336 "adrfam": "ipv4", 00:09:23.336 "trsvcid": "4420", 00:09:23.336 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:23.336 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:23.336 "hdgst": false, 00:09:23.336 "ddgst": false 00:09:23.336 }, 00:09:23.336 "method": "bdev_nvme_attach_controller" 00:09:23.336 }' 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:23.336 "params": { 00:09:23.336 "name": "Nvme1", 00:09:23.336 "trtype": "tcp", 00:09:23.336 "traddr": "10.0.0.2", 00:09:23.336 "adrfam": "ipv4", 00:09:23.336 "trsvcid": "4420", 00:09:23.336 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:23.336 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:23.336 "hdgst": false, 00:09:23.336 "ddgst": false 00:09:23.336 }, 00:09:23.336 "method": "bdev_nvme_attach_controller" 00:09:23.336 }' 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:23.336 "params": { 00:09:23.336 "name": "Nvme1", 00:09:23.336 "trtype": "tcp", 00:09:23.336 "traddr": "10.0.0.2", 00:09:23.336 "adrfam": "ipv4", 00:09:23.336 "trsvcid": "4420", 00:09:23.336 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:23.336 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:23.336 "hdgst": false, 00:09:23.336 "ddgst": false 00:09:23.336 }, 00:09:23.336 "method": "bdev_nvme_attach_controller" 00:09:23.336 }' 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:23.336 10:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:23.336 "params": { 00:09:23.336 "name": "Nvme1", 00:09:23.336 "trtype": "tcp", 00:09:23.336 "traddr": "10.0.0.2", 00:09:23.336 "adrfam": "ipv4", 00:09:23.336 "trsvcid": "4420", 00:09:23.336 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:23.336 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:23.336 "hdgst": false, 00:09:23.336 "ddgst": false 00:09:23.336 }, 00:09:23.336 "method": "bdev_nvme_attach_controller" 00:09:23.336 }' 00:09:23.336 [2024-12-13 10:11:17.149616] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:23.336 [2024-12-13 10:11:17.149707] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:23.336 [2024-12-13 10:11:17.151296] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:23.336 [2024-12-13 10:11:17.151370] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:23.336 [2024-12-13 10:11:17.154001] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:23.336 [2024-12-13 10:11:17.154006] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
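Four bdevperf instances are then run in parallel against the exported namespace, one per I/O type; only the core mask, shm id (-i) and workload differ. Paths are shortened here, and each instance reads the bdev_nvme_attach_controller JSON shown above from /dev/fd/63:

  build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256
  build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256
  build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256
  build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256

Each job keeps 128 commands in flight (-q 128) at a 4 KiB I/O size for one second; the per-workload IOPS tables that follow are the output of these four runs.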
00:09:23.336 [2024-12-13 10:11:17.154091] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-12-13 10:11:17.154092] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:23.336 --proc-type=auto ] 00:09:23.594 [2024-12-13 10:11:17.380319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.594 [2024-12-13 10:11:17.480619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.852 [2024-12-13 10:11:17.492504] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:09:23.852 [2024-12-13 10:11:17.536384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.852 [2024-12-13 10:11:17.592606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.852 [2024-12-13 10:11:17.593670] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:09:23.852 [2024-12-13 10:11:17.639326] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:09:23.852 [2024-12-13 10:11:17.692124] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:09:24.110 Running I/O for 1 seconds... 00:09:24.368 Running I/O for 1 seconds... 00:09:24.368 Running I/O for 1 seconds... 00:09:24.368 Running I/O for 1 seconds... 00:09:25.301 214048.00 IOPS, 836.12 MiB/s 00:09:25.301 Latency(us) 00:09:25.301 [2024-12-13T09:11:19.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.301 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:25.301 Nvme1n1 : 1.00 213699.55 834.76 0.00 0.00 595.96 263.31 1614.99 00:09:25.301 [2024-12-13T09:11:19.192Z] =================================================================================================================== 00:09:25.301 [2024-12-13T09:11:19.192Z] Total : 213699.55 834.76 0.00 0.00 595.96 263.31 1614.99 00:09:25.301 7271.00 IOPS, 28.40 MiB/s 00:09:25.301 Latency(us) 00:09:25.301 [2024-12-13T09:11:19.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.301 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:25.301 Nvme1n1 : 1.02 7289.61 28.48 0.00 0.00 17415.39 4556.31 34203.55 00:09:25.301 [2024-12-13T09:11:19.192Z] =================================================================================================================== 00:09:25.301 [2024-12-13T09:11:19.192Z] Total : 7289.61 28.48 0.00 0.00 17415.39 4556.31 34203.55 00:09:25.301 10925.00 IOPS, 42.68 MiB/s 00:09:25.301 Latency(us) 00:09:25.301 [2024-12-13T09:11:19.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.301 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:25.301 Nvme1n1 : 1.01 10984.63 42.91 0.00 0.00 11610.43 3698.10 17101.78 00:09:25.301 [2024-12-13T09:11:19.192Z] =================================================================================================================== 00:09:25.301 [2024-12-13T09:11:19.192Z] Total : 10984.63 42.91 0.00 0.00 11610.43 3698.10 17101.78 00:09:25.301 6589.00 IOPS, 25.74 MiB/s 00:09:25.301 Latency(us) 00:09:25.301 [2024-12-13T09:11:19.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:09:25.301 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:25.301 Nvme1n1 : 1.01 6679.30 26.09 0.00 0.00 19107.21 4587.52 59918.63 00:09:25.301 [2024-12-13T09:11:19.192Z] =================================================================================================================== 00:09:25.301 [2024-12-13T09:11:19.192Z] Total : 6679.30 26.09 0.00 0.00 19107.21 4587.52 59918.63 00:09:25.867 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3768462 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3768464 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3768467 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:26.125 rmmod nvme_tcp 00:09:26.125 rmmod nvme_fabrics 00:09:26.125 rmmod nvme_keyring 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3768214 ']' 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3768214 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3768214 ']' 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3768214 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3768214 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.125 10:11:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3768214' 00:09:26.125 killing process with pid 3768214 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3768214 00:09:26.125 10:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3768214 00:09:27.501 10:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:27.501 10:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:27.501 10:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:27.501 10:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:27.501 10:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:27.501 10:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:27.501 10:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:27.501 10:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:27.501 10:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:27.501 10:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.501 10:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.501 10:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.403 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:29.403 00:09:29.403 real 0m13.041s 00:09:29.403 user 0m29.167s 00:09:29.403 sys 0m6.132s 00:09:29.403 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.403 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:29.403 ************************************ 00:09:29.403 END TEST nvmf_bdev_io_wait 00:09:29.403 ************************************ 00:09:29.403 10:11:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:29.403 10:11:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:29.403 10:11:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.403 10:11:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:29.403 ************************************ 00:09:29.403 START TEST nvmf_queue_depth 00:09:29.403 ************************************ 00:09:29.403 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:29.403 * Looking for test storage... 
00:09:29.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:29.403 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:29.403 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:29.403 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:29.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.662 --rc genhtml_branch_coverage=1 00:09:29.662 --rc genhtml_function_coverage=1 00:09:29.662 --rc genhtml_legend=1 00:09:29.662 --rc geninfo_all_blocks=1 00:09:29.662 --rc geninfo_unexecuted_blocks=1 00:09:29.662 00:09:29.662 ' 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:29.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.662 --rc genhtml_branch_coverage=1 00:09:29.662 --rc genhtml_function_coverage=1 00:09:29.662 --rc genhtml_legend=1 00:09:29.662 --rc geninfo_all_blocks=1 00:09:29.662 --rc geninfo_unexecuted_blocks=1 00:09:29.662 00:09:29.662 ' 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:29.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.662 --rc genhtml_branch_coverage=1 00:09:29.662 --rc genhtml_function_coverage=1 00:09:29.662 --rc genhtml_legend=1 00:09:29.662 --rc geninfo_all_blocks=1 00:09:29.662 --rc geninfo_unexecuted_blocks=1 00:09:29.662 00:09:29.662 ' 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:29.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.662 --rc genhtml_branch_coverage=1 00:09:29.662 --rc genhtml_function_coverage=1 00:09:29.662 --rc genhtml_legend=1 00:09:29.662 --rc geninfo_all_blocks=1 00:09:29.662 --rc geninfo_unexecuted_blocks=1 00:09:29.662 00:09:29.662 ' 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.662 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:29.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:29.663 10:11:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:34.925 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:34.925 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:34.925 Found net devices under 0000:af:00.0: cvl_0_0 00:09:34.925 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:34.926 Found net devices under 0000:af:00.1: cvl_0_1 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:34.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:09:34.926 00:09:34.926 --- 10.0.0.2 ping statistics --- 00:09:34.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.926 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:34.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:34.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:09:34.926 00:09:34.926 --- 10.0.0.1 ping statistics --- 00:09:34.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.926 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3772410 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3772410 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3772410 ']' 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.926 10:11:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:34.926 [2024-12-13 10:11:28.417976] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:34.926 [2024-12-13 10:11:28.418063] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.926 [2024-12-13 10:11:28.538963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.926 [2024-12-13 10:11:28.644521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.926 [2024-12-13 10:11:28.644566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.926 [2024-12-13 10:11:28.644576] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.926 [2024-12-13 10:11:28.644602] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.926 [2024-12-13 10:11:28.644611] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.926 [2024-12-13 10:11:28.645883] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.493 [2024-12-13 10:11:29.244346] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.493 Malloc0 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.493 10:11:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.493 [2024-12-13 10:11:29.353694] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3772648 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3772648 /var/tmp/bdevperf.sock 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3772648 ']' 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:35.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.493 10:11:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.752 [2024-12-13 10:11:29.430473] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:35.752 [2024-12-13 10:11:29.430553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3772648 ]
00:09:35.752 [2024-12-13 10:11:29.542957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:36.010 [2024-12-13 10:11:29.649622] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:09:36.576 10:11:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:36.576 10:11:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:09:36.576 10:11:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:09:36.576 10:11:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.576 10:11:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:09:36.576 NVMe0n1
00:09:36.576 10:11:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.576 10:11:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:09:36.576 Running I/O for 10 seconds...
00:09:38.884 10240.00 IOPS, 40.00 MiB/s [2024-12-13T09:11:33.709Z] 10503.50 IOPS, 41.03 MiB/s [2024-12-13T09:11:34.644Z] 10580.67 IOPS, 41.33 MiB/s [2024-12-13T09:11:35.579Z] 10736.25 IOPS, 41.94 MiB/s [2024-12-13T09:11:36.512Z] 10714.00 IOPS, 41.85 MiB/s [2024-12-13T09:11:37.887Z] 10748.17 IOPS, 41.99 MiB/s [2024-12-13T09:11:38.820Z] 10784.14 IOPS, 42.13 MiB/s [2024-12-13T09:11:39.755Z] 10745.00 IOPS, 41.97 MiB/s [2024-12-13T09:11:40.690Z] 10787.56 IOPS, 42.14 MiB/s [2024-12-13T09:11:40.690Z] 10755.60 IOPS, 42.01 MiB/s
00:09:46.799 Latency(us)
00:09:46.799 [2024-12-13T09:11:40.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:46.799 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:09:46.799 Verification LBA range: start 0x0 length 0x4000
00:09:46.799 NVMe0n1 : 10.06 10790.55 42.15 0.00 0.00 94538.78 9549.53 60168.29
00:09:46.799 [2024-12-13T09:11:40.690Z] ===================================================================================================================
00:09:46.799 [2024-12-13T09:11:40.690Z] Total : 10790.55 42.15 0.00 0.00 94538.78 9549.53 60168.29
00:09:46.799 {
00:09:46.799 "results": [
00:09:46.799 {
00:09:46.799 "job": "NVMe0n1",
00:09:46.799 "core_mask": "0x1",
00:09:46.799 "workload": "verify",
00:09:46.799 "status": "finished",
00:09:46.799 "verify_range": {
00:09:46.799 "start": 0,
00:09:46.799 "length": 16384
00:09:46.799 },
00:09:46.799 "queue_depth": 1024,
00:09:46.799 "io_size": 4096,
00:09:46.799 "runtime": 10.056758,
00:09:46.799 "iops": 10790.55496811199,
00:09:46.799 "mibps": 42.15060534418746,
00:09:46.799 "io_failed": 0,
00:09:46.799 "io_timeout": 0,
00:09:46.799 "avg_latency_us": 94538.77996986237,
00:09:46.799 "min_latency_us": 9549.531428571428,
00:09:46.799 "max_latency_us": 60168.28952380952
00:09:46.799 }
00:09:46.799 ],
00:09:46.799 "core_count": 1
00:09:46.799 }
00:09:46.799 10:11:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3772648 00:09:46.799 10:11:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3772648 ']' 00:09:46.799 10:11:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3772648 00:09:46.799 10:11:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:46.799 10:11:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.799 10:11:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3772648 00:09:46.799 10:11:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.799 10:11:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.799 10:11:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3772648' 00:09:46.799 killing process with pid 3772648 00:09:46.799 10:11:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3772648 00:09:46.799 Received shutdown signal, test time was about 10.000000 seconds 00:09:46.799 00:09:46.799 Latency(us) 00:09:46.799 [2024-12-13T09:11:40.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.800 [2024-12-13T09:11:40.691Z] =================================================================================================================== 00:09:46.800 [2024-12-13T09:11:40.691Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:46.800 10:11:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3772648 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:47.734 rmmod nvme_tcp 00:09:47.734 rmmod nvme_fabrics 00:09:47.734 rmmod nvme_keyring 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3772410 ']' 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3772410 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3772410 ']' 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 3772410 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3772410 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3772410' 00:09:47.734 killing process with pid 3772410 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3772410 00:09:47.734 10:11:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3772410 00:09:49.109 10:11:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:49.109 10:11:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:49.109 10:11:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:49.109 10:11:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:49.109 10:11:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:49.109 10:11:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:49.109 10:11:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:49.109 10:11:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:49.109 10:11:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:49.109 10:11:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.109 10:11:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.110 10:11:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.641 10:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:51.641 00:09:51.641 real 0m21.752s 00:09:51.641 user 0m27.346s 00:09:51.641 sys 0m5.409s 00:09:51.641 10:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.641 10:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:51.641 ************************************ 00:09:51.641 END TEST nvmf_queue_depth 00:09:51.641 ************************************ 00:09:51.641 10:11:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:51.641 10:11:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:51.641 10:11:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.641 10:11:44 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:51.641 ************************************ 00:09:51.641 START TEST nvmf_target_multipath 00:09:51.641 ************************************ 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:51.641 * Looking for test storage... 00:09:51.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:51.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.641 --rc genhtml_branch_coverage=1 00:09:51.641 --rc genhtml_function_coverage=1 00:09:51.641 --rc genhtml_legend=1 00:09:51.641 --rc geninfo_all_blocks=1 00:09:51.641 --rc geninfo_unexecuted_blocks=1 00:09:51.641 00:09:51.641 ' 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:51.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.641 --rc genhtml_branch_coverage=1 00:09:51.641 --rc genhtml_function_coverage=1 00:09:51.641 --rc genhtml_legend=1 00:09:51.641 --rc geninfo_all_blocks=1 00:09:51.641 --rc geninfo_unexecuted_blocks=1 00:09:51.641 00:09:51.641 ' 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:51.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.641 --rc genhtml_branch_coverage=1 00:09:51.641 --rc genhtml_function_coverage=1 00:09:51.641 --rc genhtml_legend=1 00:09:51.641 --rc geninfo_all_blocks=1 00:09:51.641 --rc geninfo_unexecuted_blocks=1 00:09:51.641 00:09:51.641 ' 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:51.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.641 --rc genhtml_branch_coverage=1 00:09:51.641 --rc genhtml_function_coverage=1 00:09:51.641 --rc genhtml_legend=1 00:09:51.641 --rc geninfo_all_blocks=1 00:09:51.641 --rc geninfo_unexecuted_blocks=1 00:09:51.641 00:09:51.641 ' 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.641 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:51.642 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:51.642 10:11:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:56.908 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:56.908 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:56.908 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:56.908 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:56.908 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:56.908 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:56.908 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:56.908 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:56.908 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:56.908 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:56.908 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:56.908 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:56.908 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:56.908 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:56.908 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:56.908 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:56.908 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:56.908 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:56.909 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:56.909 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:56.909 Found net devices under 0000:af:00.0: cvl_0_0 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:56.909 10:11:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:56.909 Found net devices under 0000:af:00.1: cvl_0_1 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:56.909 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:57.167 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:57.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:57.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:09:57.168 00:09:57.168 --- 10.0.0.2 ping statistics --- 00:09:57.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.168 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:57.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:57.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:09:57.168 00:09:57.168 --- 10.0.0.1 ping statistics --- 00:09:57.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.168 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:57.168 only one NIC for nvmf test 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
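The trace above is nvmf_tcp_init building the point-to-point test topology: one port of the E810 pair is moved into a private network namespace to act as the target side, both ends get addresses on 10.0.0.0/24, an iptables rule opens the NVMe/TCP port, and a ping in each direction confirms the link before any NVMe traffic. Condensed into plain commands, using the interface names and addresses from this run (they are specific to this host):

# target-side port gets its own namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# initiator stays in the default namespace as 10.0.0.1, target becomes 10.0.0.2
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP port, tagged so cleanup can strip the rule later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# verify both directions, then load the initiator driver
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp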
00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:57.168 rmmod nvme_tcp 00:09:57.168 rmmod nvme_fabrics 00:09:57.168 rmmod nvme_keyring 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.168 10:11:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:59.702 00:09:59.702 real 0m8.052s 00:09:59.702 user 0m1.790s 00:09:59.702 sys 0m4.291s 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:59.702 ************************************ 00:09:59.702 END TEST nvmf_target_multipath 00:09:59.702 ************************************ 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:59.702 ************************************ 00:09:59.702 START TEST nvmf_zcopy 00:09:59.702 ************************************ 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:59.702 * Looking for test storage... 
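nvmftestfini runs the same teardown both times it appears above (once from multipath.sh@47 when only one usable NIC pair is found, once more from the EXIT trap): unload the initiator modules, restore iptables without the SPDK-tagged rules, and remove the namespace. A rough equivalent, with the parts the trace does not expand marked as assumptions:

set +e
# retry in case the modules are still busy; the real loop's exit test is not visible in the trace
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
done
set -e

# drop only the rules carrying the SPDK_NVMF comment added during setup
iptables-save | grep -v SPDK_NVMF | iptables-restore

# _remove_spdk_ns is assumed to delete the test namespace created earlier
ip netns del cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1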
00:09:59.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:59.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.702 --rc genhtml_branch_coverage=1 00:09:59.702 --rc genhtml_function_coverage=1 00:09:59.702 --rc genhtml_legend=1 00:09:59.702 --rc geninfo_all_blocks=1 00:09:59.702 --rc geninfo_unexecuted_blocks=1 00:09:59.702 00:09:59.702 ' 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:59.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.702 --rc genhtml_branch_coverage=1 00:09:59.702 --rc genhtml_function_coverage=1 00:09:59.702 --rc genhtml_legend=1 00:09:59.702 --rc geninfo_all_blocks=1 00:09:59.702 --rc geninfo_unexecuted_blocks=1 00:09:59.702 00:09:59.702 ' 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:59.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.702 --rc genhtml_branch_coverage=1 00:09:59.702 --rc genhtml_function_coverage=1 00:09:59.702 --rc genhtml_legend=1 00:09:59.702 --rc geninfo_all_blocks=1 00:09:59.702 --rc geninfo_unexecuted_blocks=1 00:09:59.702 00:09:59.702 ' 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:59.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.702 --rc genhtml_branch_coverage=1 00:09:59.702 --rc genhtml_function_coverage=1 00:09:59.702 --rc genhtml_legend=1 00:09:59.702 --rc geninfo_all_blocks=1 00:09:59.702 --rc geninfo_unexecuted_blocks=1 00:09:59.702 00:09:59.702 ' 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.702 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:59.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:59.703 10:11:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:06.266 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:06.266 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:06.266 Found net devices under 0000:af:00.0: cvl_0_0 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:06.266 Found net devices under 0000:af:00.1: cvl_0_1 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.266 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.267 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.267 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:06.267 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.267 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.267 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:06.267 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:10:06.267 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.267 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.267 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:06.267 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:06.267 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.267 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.267 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.267 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.267 10:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:06.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:10:06.267 00:10:06.267 --- 10.0.0.2 ping statistics --- 00:10:06.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.267 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:06.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:10:06.267 00:10:06.267 --- 10.0.0.1 ping statistics --- 00:10:06.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.267 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3781827 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3781827 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3781827 ']' 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.267 10:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.267 [2024-12-13 10:11:59.238106] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
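nvmfappstart above launches the target itself: nvmf_tgt runs inside the namespace with core mask 0x2 (nvmfpid=3781827 here), and waitforlisten blocks until the application's RPC socket answers before any rpc_cmd is issued. A crude stand-in for that launch-and-wait, assuming the default /var/tmp/spdk.sock RPC path shown in the log:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# rough equivalent of waitforlisten: wait for the RPC unix socket to appear
# (the real helper also retries an RPC against it before returning)
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done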
00:10:06.267 [2024-12-13 10:11:59.238195] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.267 [2024-12-13 10:11:59.353374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.267 [2024-12-13 10:11:59.463489] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.267 [2024-12-13 10:11:59.463532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.267 [2024-12-13 10:11:59.463542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.267 [2024-12-13 10:11:59.463553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.267 [2024-12-13 10:11:59.463561] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.267 [2024-12-13 10:11:59.464821] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.267 [2024-12-13 10:12:00.074919] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.267 [2024-12-13 10:12:00.091093] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.267 malloc0 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:06.267 { 00:10:06.267 "params": { 00:10:06.267 "name": "Nvme$subsystem", 00:10:06.267 "trtype": "$TEST_TRANSPORT", 00:10:06.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:06.267 "adrfam": "ipv4", 00:10:06.267 "trsvcid": "$NVMF_PORT", 00:10:06.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:06.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:06.267 "hdgst": ${hdgst:-false}, 00:10:06.267 "ddgst": ${ddgst:-false} 00:10:06.267 }, 00:10:06.267 "method": "bdev_nvme_attach_controller" 00:10:06.267 } 00:10:06.267 EOF 00:10:06.267 )") 00:10:06.267 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:06.525 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
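With nvmf_tgt listening, the subsystem is configured entirely over RPC; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py talking to the target's /var/tmp/spdk.sock. Written out directly, with the names and addresses from this run:

# TCP transport with the test's options (-o, -c 0) plus zero-copy enabled
scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
# subsystem cnode1: allow any host (-a), fixed serial number, at most 10 namespaces
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# 32 MB malloc bdev with 4096-byte blocks, exposed as namespace 1
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The bdevperf initiator that follows does not use rpc.py at all: gen_nvmf_target_json prints a one-controller bdev_nvme_attach_controller config (shown just below) and bdevperf reads it over a pipe via --json /dev/fd/62.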
00:10:06.525 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:06.525 10:12:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:06.525 "params": { 00:10:06.525 "name": "Nvme1", 00:10:06.525 "trtype": "tcp", 00:10:06.525 "traddr": "10.0.0.2", 00:10:06.525 "adrfam": "ipv4", 00:10:06.525 "trsvcid": "4420", 00:10:06.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:06.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:06.525 "hdgst": false, 00:10:06.525 "ddgst": false 00:10:06.525 }, 00:10:06.525 "method": "bdev_nvme_attach_controller" 00:10:06.525 }' 00:10:06.525 [2024-12-13 10:12:00.225199] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:10:06.525 [2024-12-13 10:12:00.225285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3782054 ] 00:10:06.525 [2024-12-13 10:12:00.339966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.783 [2024-12-13 10:12:00.452679] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.349 Running I/O for 10 seconds... 00:10:09.213 7474.00 IOPS, 58.39 MiB/s [2024-12-13T09:12:04.038Z] 7527.50 IOPS, 58.81 MiB/s [2024-12-13T09:12:05.410Z] 7550.00 IOPS, 58.98 MiB/s [2024-12-13T09:12:06.343Z] 7558.00 IOPS, 59.05 MiB/s [2024-12-13T09:12:07.277Z] 7578.20 IOPS, 59.20 MiB/s [2024-12-13T09:12:08.209Z] 7557.17 IOPS, 59.04 MiB/s [2024-12-13T09:12:09.142Z] 7571.00 IOPS, 59.15 MiB/s [2024-12-13T09:12:10.075Z] 7561.25 IOPS, 59.07 MiB/s [2024-12-13T09:12:11.448Z] 7546.89 IOPS, 58.96 MiB/s [2024-12-13T09:12:11.448Z] 7537.40 IOPS, 58.89 MiB/s 00:10:17.557 Latency(us) 00:10:17.557 [2024-12-13T09:12:11.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.557 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:17.557 Verification LBA range: start 0x0 length 0x1000 00:10:17.557 Nvme1n1 : 10.01 7538.70 58.90 0.00 0.00 16930.52 485.67 23842.62 00:10:17.557 [2024-12-13T09:12:11.448Z] =================================================================================================================== 00:10:17.557 [2024-12-13T09:12:11.448Z] Total : 7538.70 58.90 0.00 0.00 16930.52 485.67 23842.62 00:10:18.123 10:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3784381 00:10:18.123 10:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:18.123 10:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.123 10:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:18.123 10:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:18.123 10:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:18.123 10:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:18.123 10:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:18.123 10:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:18.123 { 00:10:18.123 "params": { 00:10:18.123 "name": 
"Nvme$subsystem", 00:10:18.123 "trtype": "$TEST_TRANSPORT", 00:10:18.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:18.123 "adrfam": "ipv4", 00:10:18.123 "trsvcid": "$NVMF_PORT", 00:10:18.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:18.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:18.123 "hdgst": ${hdgst:-false}, 00:10:18.123 "ddgst": ${ddgst:-false} 00:10:18.123 }, 00:10:18.123 "method": "bdev_nvme_attach_controller" 00:10:18.123 } 00:10:18.123 EOF 00:10:18.123 )") 00:10:18.123 10:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:18.123 [2024-12-13 10:12:11.930041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.123 [2024-12-13 10:12:11.930083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.123 10:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:18.123 10:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:18.123 10:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:18.123 "params": { 00:10:18.123 "name": "Nvme1", 00:10:18.123 "trtype": "tcp", 00:10:18.123 "traddr": "10.0.0.2", 00:10:18.123 "adrfam": "ipv4", 00:10:18.123 "trsvcid": "4420", 00:10:18.123 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:18.123 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:18.123 "hdgst": false, 00:10:18.123 "ddgst": false 00:10:18.123 }, 00:10:18.123 "method": "bdev_nvme_attach_controller" 00:10:18.123 }' 00:10:18.123 [2024-12-13 10:12:11.938051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.123 [2024-12-13 10:12:11.938077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.123 [2024-12-13 10:12:11.946015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.123 [2024-12-13 10:12:11.946035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.123 [2024-12-13 10:12:11.954042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.123 [2024-12-13 10:12:11.954063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.123 [2024-12-13 10:12:11.962063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.123 [2024-12-13 10:12:11.962086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.123 [2024-12-13 10:12:11.974078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.123 [2024-12-13 10:12:11.974097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.123 [2024-12-13 10:12:11.982121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.123 [2024-12-13 10:12:11.982140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.123 [2024-12-13 10:12:11.990141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.123 [2024-12-13 10:12:11.990160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.123 [2024-12-13 10:12:11.998021] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:18.123 [2024-12-13 10:12:11.998109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3784381 ] 00:10:18.123 [2024-12-13 10:12:11.998160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.123 [2024-12-13 10:12:11.998179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.123 [2024-12-13 10:12:12.006189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.123 [2024-12-13 10:12:12.006208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.123 [2024-12-13 10:12:12.014193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.123 [2024-12-13 10:12:12.014213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.022237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.022256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.030249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.030268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.038269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.038288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.046288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.046307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.054322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.054340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.062328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.062346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.070361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.070380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.078369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.078387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.086408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.086426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.094420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.094439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.102433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.102459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.110475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.110500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.112935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.382 [2024-12-13 10:12:12.118500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.118519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.126509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.126529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.134560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.134582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.142555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.142575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.150596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.150617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.158612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.158631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.166620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.166639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.174654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.174672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.182669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.182687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.190677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.190695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.198727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.198746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.206724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.206744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.214759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:18.382 [2024-12-13 10:12:12.214778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.222778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.222796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.226419] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.382 [2024-12-13 10:12:12.230812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.230832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.238832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.238851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.246857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.246881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.254857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.254876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.262895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.262915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.382 [2024-12-13 10:12:12.270903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.382 [2024-12-13 10:12:12.270922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.278936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.278955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.286988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.287006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.294966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.294985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.303007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.303026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.311028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.311047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.319048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.319069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.327093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.327113] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.335080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.335099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.343118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.343136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.351136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.351155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.359147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.359166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.367181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.367199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.375208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.375226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.383228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.383247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.391251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.391269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.399255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.399276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.407294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.407313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.415312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.415330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.423336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.423354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.431364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.431383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.439379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.439398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.447395] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.447415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.455443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.455470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.463444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.463471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.471496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.471514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.479501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.479519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.640 [2024-12-13 10:12:12.487522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.640 [2024-12-13 10:12:12.487540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.641 [2024-12-13 10:12:12.495548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.641 [2024-12-13 10:12:12.495567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.641 [2024-12-13 10:12:12.503559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.641 [2024-12-13 10:12:12.503577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.641 [2024-12-13 10:12:12.511574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.641 [2024-12-13 10:12:12.511592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.641 [2024-12-13 10:12:12.519625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.641 [2024-12-13 10:12:12.519654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.641 [2024-12-13 10:12:12.527612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.641 [2024-12-13 10:12:12.527630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.898 [2024-12-13 10:12:12.535649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.898 [2024-12-13 10:12:12.535667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.898 [2024-12-13 10:12:12.543667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.898 [2024-12-13 10:12:12.543686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.898 [2024-12-13 10:12:12.551680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.898 [2024-12-13 10:12:12.551713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.898 [2024-12-13 10:12:12.559724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.898 [2024-12-13 10:12:12.559743] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.898 [2024-12-13 10:12:12.567756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.898 [2024-12-13 10:12:12.567779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.898 [2024-12-13 10:12:12.575796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.898 [2024-12-13 10:12:12.575817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.898 [2024-12-13 10:12:12.583826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.898 [2024-12-13 10:12:12.583846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.898 [2024-12-13 10:12:12.591852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.898 [2024-12-13 10:12:12.591872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.898 [2024-12-13 10:12:12.599882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.898 [2024-12-13 10:12:12.599902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.898 [2024-12-13 10:12:12.607926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.898 [2024-12-13 10:12:12.607946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.898 [2024-12-13 10:12:12.615914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.898 [2024-12-13 10:12:12.615934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.898 [2024-12-13 10:12:12.623959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.898 [2024-12-13 10:12:12.623979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.898 [2024-12-13 10:12:12.667782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.898 [2024-12-13 10:12:12.667807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.898 [2024-12-13 10:12:12.672078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.898 [2024-12-13 10:12:12.672097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.898 [2024-12-13 10:12:12.680112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.898 [2024-12-13 10:12:12.680130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.898 Running I/O for 5 seconds... 
00:10:18.898 [2024-12-13 10:12:12.692171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.898 [2024-12-13 10:12:12.692196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.898 [2024-12-13 10:12:12.700760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.898 [2024-12-13 10:12:12.700784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.898 [2024-12-13 10:12:12.711254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.898 [2024-12-13 10:12:12.711279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.898 [2024-12-13 10:12:12.721040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.898 [2024-12-13 10:12:12.721065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.898 [2024-12-13 10:12:12.728637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.898 [2024-12-13 10:12:12.728659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.898 [2024-12-13 10:12:12.739912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.898 [2024-12-13 10:12:12.739935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.898 [2024-12-13 10:12:12.748715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.899 [2024-12-13 10:12:12.748738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.899 [2024-12-13 10:12:12.759332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.899 [2024-12-13 10:12:12.759356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.899 [2024-12-13 10:12:12.769194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.899 [2024-12-13 10:12:12.769218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.899 [2024-12-13 10:12:12.776874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.899 [2024-12-13 10:12:12.776896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.899 [2024-12-13 10:12:12.788036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.899 [2024-12-13 10:12:12.788060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:12.796586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:12.796610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:12.807052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:12.807076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:12.816919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:12.816943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:12.824774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 
[2024-12-13 10:12:12.824796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:12.836443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:12.836475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:12.847791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:12.847814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:12.855378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:12.855400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:12.866661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:12.866684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:12.874374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:12.874397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:12.885754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:12.885783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:12.894072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:12.894095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:12.904863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:12.904886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:12.914682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:12.914705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:12.922383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:12.922405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:12.933742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:12.933765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:12.942487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:12.942511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:12.951036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:12.951059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:12.959827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:12.959850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:12.969074] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:12.969098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:12.977978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:12.978000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:12.986829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:12.986853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:12.995470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:12.995493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:13.004282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:13.004306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:13.013100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:13.013124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:13.021849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:13.021872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:13.030744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:13.030767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.156 [2024-12-13 10:12:13.039778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.156 [2024-12-13 10:12:13.039802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.048641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.048665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.057502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.057526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.066116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.066138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.074893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.074917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.083721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.083743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.092543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.092569] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.101552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.101576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.110283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.110306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.119029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.119053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.127556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.127579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.136343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.136365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.145760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.145782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.154382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.154406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.163214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.163237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.171988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.172012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.180415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.180437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.189189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.189213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.197989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.198013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.206885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.206909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.215900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.215922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.224660] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.224683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.233489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.233513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.242178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.242202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.250770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.250793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.259544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.259572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.268332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.268354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.277078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.277100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.285679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.285702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.294595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.294620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.414 [2024-12-13 10:12:13.303386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.414 [2024-12-13 10:12:13.303409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.312264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.312288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.321315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.321338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.330202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.330225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.339160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.339183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.348237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.348259] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.356898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.356921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.365924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.365947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.374823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.374845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.383672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.383695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.392710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.392733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.402095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.402117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.411159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.411183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.419755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.419778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.428557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.428584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.437509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.437531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.446548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.446572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.455377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.455400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.464363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.464387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.473385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.473409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.482377] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.482400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.491646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.491669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.500695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.500720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.509596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.509620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.519892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.519916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.528097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.528121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.539045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.539068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.547722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.547745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.673 [2024-12-13 10:12:13.556550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.673 [2024-12-13 10:12:13.556573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.931 [2024-12-13 10:12:13.565469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.931 [2024-12-13 10:12:13.565494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.931 [2024-12-13 10:12:13.574713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.931 [2024-12-13 10:12:13.574737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.931 [2024-12-13 10:12:13.584030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.931 [2024-12-13 10:12:13.584053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.931 [2024-12-13 10:12:13.592965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.931 [2024-12-13 10:12:13.592987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.931 [2024-12-13 10:12:13.602096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.602124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 10:12:13.610803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.610826] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 10:12:13.619640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.619664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 10:12:13.628398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.628422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 10:12:13.637410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.637433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 10:12:13.647503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.647527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 10:12:13.655852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.655875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 10:12:13.666744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.666769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 10:12:13.675493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.675516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 10:12:13.686099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.686123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 14260.00 IOPS, 111.41 MiB/s [2024-12-13T09:12:13.823Z] [2024-12-13 10:12:13.694399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.694425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 10:12:13.704904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.704927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 10:12:13.713155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.713179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 10:12:13.724500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.724525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 10:12:13.733298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.733322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 10:12:13.742526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.742550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 
10:12:13.751421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.751446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 10:12:13.760336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.760359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 10:12:13.769265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.769288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 10:12:13.778185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.778208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 10:12:13.787088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.787112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 10:12:13.795904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.795927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 10:12:13.804726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.804749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 10:12:13.813548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.813572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.932 [2024-12-13 10:12:13.822748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.932 [2024-12-13 10:12:13.822772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.190 [2024-12-13 10:12:13.831649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.190 [2024-12-13 10:12:13.831673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.190 [2024-12-13 10:12:13.840415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.190 [2024-12-13 10:12:13.840439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.190 [2024-12-13 10:12:13.849255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.190 [2024-12-13 10:12:13.849278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.190 [2024-12-13 10:12:13.858323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.190 [2024-12-13 10:12:13.858347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.190 [2024-12-13 10:12:13.867171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.190 [2024-12-13 10:12:13.867195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.190 [2024-12-13 10:12:13.876154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.190 [2024-12-13 10:12:13.876177] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.190 [2024-12-13 10:12:13.884931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.190 [2024-12-13 10:12:13.884953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.190 [2024-12-13 10:12:13.894233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.190 [2024-12-13 10:12:13.894256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.190 [2024-12-13 10:12:13.903002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.190 [2024-12-13 10:12:13.903025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.190 [2024-12-13 10:12:13.912055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.190 [2024-12-13 10:12:13.912078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.190 [2024-12-13 10:12:13.921043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.190 [2024-12-13 10:12:13.921066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.190 [2024-12-13 10:12:13.929861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.190 [2024-12-13 10:12:13.929885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.190 [2024-12-13 10:12:13.938892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.190 [2024-12-13 10:12:13.938916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.190 [2024-12-13 10:12:13.948030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.190 [2024-12-13 10:12:13.948053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.190 [2024-12-13 10:12:13.956809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.190 [2024-12-13 10:12:13.956832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.190 [2024-12-13 10:12:13.965445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.190 [2024-12-13 10:12:13.965476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.190 [2024-12-13 10:12:13.974382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.190 [2024-12-13 10:12:13.974405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.190 [2024-12-13 10:12:13.983252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.190 [2024-12-13 10:12:13.983275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.190 [2024-12-13 10:12:13.992263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.190 [2024-12-13 10:12:13.992286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.190 [2024-12-13 10:12:14.001328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.190 [2024-12-13 10:12:14.001351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.190 [2024-12-13 10:12:14.010201] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.190 [2024-12-13 10:12:14.010224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair, subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use followed by nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace, repeats at roughly 9 ms intervals from 10:12:14.019 through 10:12:16.717 (2024-12-13); duplicate entries omitted here, periodic throughput samples retained ...]
00:10:20.965 14301.50 IOPS, 111.73 MiB/s [2024-12-13T09:12:14.856Z]
00:10:21.999 14294.33 IOPS, 111.67 MiB/s [2024-12-13T09:12:15.890Z]
00:10:23.032 14300.25 IOPS, 111.72 MiB/s [2024-12-13T09:12:16.923Z]
00:10:23.032 [2024-12-13 10:12:16.726887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:23.032 [2024-12-13 10:12:16.726911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.032 [2024-12-13 10:12:16.736152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.032 [2024-12-13 10:12:16.736174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.032 [2024-12-13 10:12:16.745068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.032 [2024-12-13 10:12:16.745093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.032 [2024-12-13 10:12:16.753810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.032 [2024-12-13 10:12:16.753833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.032 [2024-12-13 10:12:16.762652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.032 [2024-12-13 10:12:16.762675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.032 [2024-12-13 10:12:16.771563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.032 [2024-12-13 10:12:16.771586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.032 [2024-12-13 10:12:16.780177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.032 [2024-12-13 10:12:16.780199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.032 [2024-12-13 10:12:16.789002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.032 [2024-12-13 10:12:16.789026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.032 [2024-12-13 10:12:16.797896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.032 [2024-12-13 10:12:16.797919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.032 [2024-12-13 10:12:16.806624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.032 [2024-12-13 10:12:16.806652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.032 [2024-12-13 10:12:16.815120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.032 [2024-12-13 10:12:16.815145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.032 [2024-12-13 10:12:16.823892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.032 [2024-12-13 10:12:16.823916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.032 [2024-12-13 10:12:16.832748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.032 [2024-12-13 10:12:16.832771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.032 [2024-12-13 10:12:16.841369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.032 [2024-12-13 10:12:16.841392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.032 [2024-12-13 10:12:16.850280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.032 [2024-12-13 10:12:16.850303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.032 [2024-12-13 10:12:16.859111] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.033 [2024-12-13 10:12:16.859133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.033 [2024-12-13 10:12:16.867856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.033 [2024-12-13 10:12:16.867878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.033 [2024-12-13 10:12:16.876703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.033 [2024-12-13 10:12:16.876726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.033 [2024-12-13 10:12:16.885503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.033 [2024-12-13 10:12:16.885526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.033 [2024-12-13 10:12:16.894383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.033 [2024-12-13 10:12:16.894406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.033 [2024-12-13 10:12:16.904297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.033 [2024-12-13 10:12:16.904320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.033 [2024-12-13 10:12:16.914180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.033 [2024-12-13 10:12:16.914203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.033 [2024-12-13 10:12:16.922131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.033 [2024-12-13 10:12:16.922153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:16.933995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:16.934018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:16.942518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:16.942540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:16.953923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:16.953946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:16.963752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:16.963775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:16.972149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:16.972171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:16.982565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:16.982593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:16.990918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:16.990941] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:17.001594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:17.001618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:17.011584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:17.011607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:17.021189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:17.021211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:17.028679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:17.028701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:17.040130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:17.040153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:17.048935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:17.048957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:17.057716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:17.057738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:17.066992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:17.067015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:17.075828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:17.075851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:17.084754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:17.084777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:17.093727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:17.093750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:17.102583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:17.102607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:17.111435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:17.111463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:17.120066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:17.120089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:17.129068] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:17.129091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:17.137758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:17.137781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:17.146492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:17.146515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:17.155475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:17.155498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:17.164513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:17.164537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:17.173336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:17.173359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.290 [2024-12-13 10:12:17.182314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.290 [2024-12-13 10:12:17.182337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.191377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.191400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.200137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.200160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.208913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.208936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.217740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.217763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.227066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.227090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.235656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.235680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.244552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.244575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.253395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.253419] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.262222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.262246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.271346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.271369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.280263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.280286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.289236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.289260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.298337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.298360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.307269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.307292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.316257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.316280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.325190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.325213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.334270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.334292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.343311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.343333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.351983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.352006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.360946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.360969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.369866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.369888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.378677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.378700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.387553] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.387575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.396497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.396519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.405760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.405783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.414711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.414735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.423609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.423632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.548 [2024-12-13 10:12:17.432631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.548 [2024-12-13 10:12:17.432654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.441168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.441191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.449845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.449868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.458534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.458557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.467548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.467571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.476565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.476588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.485358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.485381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.494066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.494088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.502852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.502875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.511419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.511442] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.520142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.520166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.529193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.529217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.538206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.538229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.546918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.546941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.555858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.555881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.564617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.564640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.573253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.573276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.582012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.582035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.591060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.591084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.599709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.599733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.608415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.608439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.617155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.617180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.626047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.626070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.634986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.635011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.643972] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.643996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.652876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.652905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.661680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.661715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.670482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.670507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.679673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.679696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.688385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.688410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.806 [2024-12-13 10:12:17.697020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.806 [2024-12-13 10:12:17.697044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 14320.20 IOPS, 111.88 MiB/s [2024-12-13T09:12:17.955Z] [2024-12-13 10:12:17.703949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.703973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 00:10:24.064 Latency(us) 00:10:24.064 [2024-12-13T09:12:17.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:24.064 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:24.064 Nvme1n1 : 5.01 14321.58 111.89 0.00 0.00 8928.26 3994.58 14480.34 00:10:24.064 [2024-12-13T09:12:17.955Z] =================================================================================================================== 00:10:24.064 [2024-12-13T09:12:17.955Z] Total : 14321.58 111.89 0.00 0.00 8928.26 3994.58 14480.34 00:10:24.064 [2024-12-13 10:12:17.711236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.711256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.719250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.719271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.727286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.727306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.735294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.735313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 
10:12:17.743305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.743323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.751347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.751367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.759372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.759395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.767379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.767399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.775407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.775427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.783426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.783456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.791456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.791491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.799476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.799495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.807482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.807501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.815527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.815546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.823545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.823564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.831579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.831598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.839582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.839601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.847596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.847615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.855636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.855657] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.863662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.863682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.871663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.871682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.879707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.879726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.887719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.887738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.895728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.895747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.903767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.903785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.911772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.911791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.919809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.919828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.927828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.927847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.935843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.935865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.943874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.943892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.064 [2024-12-13 10:12:17.951890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.064 [2024-12-13 10:12:17.951908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.321 [2024-12-13 10:12:17.959902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.321 [2024-12-13 10:12:17.959920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.321 [2024-12-13 10:12:17.967941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.321 [2024-12-13 10:12:17.967960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.321 [2024-12-13 10:12:17.975959] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.321 [2024-12-13 10:12:17.975977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.321 [2024-12-13 10:12:17.983986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.321 [2024-12-13 10:12:17.984005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.321 [2024-12-13 10:12:17.992005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.321 [2024-12-13 10:12:17.992023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.321 [2024-12-13 10:12:18.000012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.321 [2024-12-13 10:12:18.000031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.321 [2024-12-13 10:12:18.008048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.321 [2024-12-13 10:12:18.008067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.321 [2024-12-13 10:12:18.016067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.321 [2024-12-13 10:12:18.016086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.321 [2024-12-13 10:12:18.024085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.321 [2024-12-13 10:12:18.024103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.321 [2024-12-13 10:12:18.032110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.321 [2024-12-13 10:12:18.032128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.321 [2024-12-13 10:12:18.040120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.321 [2024-12-13 10:12:18.040139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.321 [2024-12-13 10:12:18.048161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.321 [2024-12-13 10:12:18.048181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.322 [2024-12-13 10:12:18.056187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.322 [2024-12-13 10:12:18.056207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.322 [2024-12-13 10:12:18.064191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.322 [2024-12-13 10:12:18.064210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.322 [2024-12-13 10:12:18.072238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.322 [2024-12-13 10:12:18.072256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.322 [2024-12-13 10:12:18.080250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.322 [2024-12-13 10:12:18.080268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.322 [2024-12-13 10:12:18.088257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.322 [2024-12-13 10:12:18.088276] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.322 [2024-12-13 10:12:18.096291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.322 [2024-12-13 10:12:18.096309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.322 [2024-12-13 10:12:18.104297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.322 [2024-12-13 10:12:18.104315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.322 [2024-12-13 10:12:18.112338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.322 [2024-12-13 10:12:18.112356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.322 [2024-12-13 10:12:18.120366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.322 [2024-12-13 10:12:18.120386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.322 [2024-12-13 10:12:18.128374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.322 [2024-12-13 10:12:18.128397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.322 [2024-12-13 10:12:18.136409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.322 [2024-12-13 10:12:18.136428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.322 [2024-12-13 10:12:18.144430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.322 [2024-12-13 10:12:18.144454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.322 [2024-12-13 10:12:18.152435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.322 [2024-12-13 10:12:18.152459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.322 [2024-12-13 10:12:18.160492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.322 [2024-12-13 10:12:18.160511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.322 [2024-12-13 10:12:18.168486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.322 [2024-12-13 10:12:18.168504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.322 [2024-12-13 10:12:18.176520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.322 [2024-12-13 10:12:18.176538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.322 [2024-12-13 10:12:18.184540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.322 [2024-12-13 10:12:18.184559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.322 [2024-12-13 10:12:18.192546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.322 [2024-12-13 10:12:18.192565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.322 [2024-12-13 10:12:18.200586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.322 [2024-12-13 10:12:18.200605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.322 [2024-12-13 10:12:18.208600] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.322 [2024-12-13 10:12:18.208618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.579 [2024-12-13 10:12:18.216612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.579 [2024-12-13 10:12:18.216630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.579 [2024-12-13 10:12:18.224650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.579 [2024-12-13 10:12:18.224669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.579 [2024-12-13 10:12:18.232657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.579 [2024-12-13 10:12:18.232675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.579 [2024-12-13 10:12:18.240699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.579 [2024-12-13 10:12:18.240716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.579 [2024-12-13 10:12:18.248720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.579 [2024-12-13 10:12:18.248738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.579 [2024-12-13 10:12:18.256732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.579 [2024-12-13 10:12:18.256750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.579 [2024-12-13 10:12:18.264774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.264793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.272783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.272801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.280797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.280815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.288817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.288836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.296829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.296848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.304872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.304892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.312907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.312927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.320892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.320910] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.328935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.328953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.336946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.336965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.344960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.344978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.352993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.353011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.361014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.361032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.369034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.369052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.377072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.377090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.385064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.385082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.393103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.393122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.401127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.401147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.409137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.409156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.417166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.417185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.425180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.425199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.433212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.433230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.441238] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.441257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.449247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.449265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.457291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.457309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.580 [2024-12-13 10:12:18.465301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.580 [2024-12-13 10:12:18.465320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.837 [2024-12-13 10:12:18.473309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.837 [2024-12-13 10:12:18.473327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.837 [2024-12-13 10:12:18.481346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.837 [2024-12-13 10:12:18.481364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.837 [2024-12-13 10:12:18.489352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.837 [2024-12-13 10:12:18.489370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.837 [2024-12-13 10:12:18.497395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.837 [2024-12-13 10:12:18.497414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.837 [2024-12-13 10:12:18.505413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.837 [2024-12-13 10:12:18.505431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.837 [2024-12-13 10:12:18.513423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.837 [2024-12-13 10:12:18.513442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.837 [2024-12-13 10:12:18.521462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.837 [2024-12-13 10:12:18.521497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.837 [2024-12-13 10:12:18.529500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.837 [2024-12-13 10:12:18.529518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.837 [2024-12-13 10:12:18.537508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.838 [2024-12-13 10:12:18.537527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.838 [2024-12-13 10:12:18.545547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.838 [2024-12-13 10:12:18.545565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.838 [2024-12-13 10:12:18.553566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.838 [2024-12-13 10:12:18.553584] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.838 [2024-12-13 10:12:18.561587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.838 [2024-12-13 10:12:18.561606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.838 [2024-12-13 10:12:18.569593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.838 [2024-12-13 10:12:18.569611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.838 [2024-12-13 10:12:18.577598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.838 [2024-12-13 10:12:18.577616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.838 [2024-12-13 10:12:18.585635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.838 [2024-12-13 10:12:18.585654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.838 [2024-12-13 10:12:18.593655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.838 [2024-12-13 10:12:18.593674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.838 [2024-12-13 10:12:18.601674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.838 [2024-12-13 10:12:18.601693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.838 [2024-12-13 10:12:18.609701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.838 [2024-12-13 10:12:18.609719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3784381) - No such process 00:10:24.838 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3784381 00:10:24.838 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.838 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.838 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:24.838 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.838 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:24.838 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.838 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:24.838 delay0 00:10:24.838 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.838 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:24.838 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.838 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:24.838 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.838 10:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:25.095 [2024-12-13 10:12:18.748047] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:33.201 Initializing NVMe Controllers 00:10:33.201 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:33.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:33.201 Initialization complete. Launching workers. 00:10:33.201 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 267, failed: 19663 00:10:33.201 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 19840, failed to submit 90 00:10:33.201 success 19717, unsuccessful 123, failed 0 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:33.201 rmmod nvme_tcp 00:10:33.201 rmmod nvme_fabrics 00:10:33.201 rmmod nvme_keyring 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3781827 ']' 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3781827 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3781827 ']' 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3781827 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3781827 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3781827' 00:10:33.201 killing process with pid 3781827 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3781827 00:10:33.201 10:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- common/autotest_common.sh@978 -- # wait 3781827 00:10:33.459 10:12:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:33.459 10:12:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:33.459 10:12:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:33.459 10:12:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:33.459 10:12:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:33.459 10:12:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:33.459 10:12:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:33.459 10:12:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:33.459 10:12:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:33.459 10:12:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.459 10:12:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.459 10:12:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.990 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:35.990 00:10:35.990 real 0m36.124s 00:10:35.990 user 0m49.934s 00:10:35.990 sys 0m12.053s 00:10:35.990 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.990 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.990 ************************************ 00:10:35.990 END TEST nvmf_zcopy 00:10:35.990 ************************************ 00:10:35.990 10:12:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:35.990 10:12:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:35.990 10:12:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.990 10:12:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:35.990 ************************************ 00:10:35.990 START TEST nvmf_nmic 00:10:35.990 ************************************ 00:10:35.990 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:35.990 * Looking for test storage... 
00:10:35.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.990 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:35.990 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:35.990 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:35.990 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:35.990 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.990 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.990 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.990 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:35.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.991 --rc genhtml_branch_coverage=1 00:10:35.991 --rc genhtml_function_coverage=1 00:10:35.991 --rc genhtml_legend=1 00:10:35.991 --rc geninfo_all_blocks=1 00:10:35.991 --rc geninfo_unexecuted_blocks=1 00:10:35.991 00:10:35.991 ' 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:35.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.991 --rc genhtml_branch_coverage=1 00:10:35.991 --rc genhtml_function_coverage=1 00:10:35.991 --rc genhtml_legend=1 00:10:35.991 --rc geninfo_all_blocks=1 00:10:35.991 --rc geninfo_unexecuted_blocks=1 00:10:35.991 00:10:35.991 ' 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:35.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.991 --rc genhtml_branch_coverage=1 00:10:35.991 --rc genhtml_function_coverage=1 00:10:35.991 --rc genhtml_legend=1 00:10:35.991 --rc geninfo_all_blocks=1 00:10:35.991 --rc geninfo_unexecuted_blocks=1 00:10:35.991 00:10:35.991 ' 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:35.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.991 --rc genhtml_branch_coverage=1 00:10:35.991 --rc genhtml_function_coverage=1 00:10:35.991 --rc genhtml_legend=1 00:10:35.991 --rc geninfo_all_blocks=1 00:10:35.991 --rc geninfo_unexecuted_blocks=1 00:10:35.991 00:10:35.991 ' 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
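The shell trace above is scripts/common.sh deciding whether the installed lcov predates 2.x: lt 1.15 2 calls cmp_versions, which splits both version strings on '.', '-' and ':' and compares them field by field. A minimal stand-alone sketch of that component-wise comparison follows; the helper name version_lt is made up here, numeric fields are assumed, and the real cmp_versions additionally normalizes each field through its decimal helper, so this is an illustration rather than the actual implementation.

  # version_lt A B -> exit 0 when A sorts before B, comparing '.'/'-'/':'-separated numeric fields
  version_lt() {
    local IFS=.-: i x y
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      x=${a[i]:-0}; y=${b[i]:-0}      # missing fields compare as 0
      ((x > y)) && return 1
      ((x < y)) && return 0
    done
    return 1  # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'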
00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:35.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:35.991 
10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:35.991 10:12:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:41.432 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:41.432 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:41.432 10:12:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:41.432 Found net devices under 0000:af:00.0: cvl_0_0 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.432 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:41.432 Found net devices under 0000:af:00.1: cvl_0_1 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:41.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:41.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:10:41.433 00:10:41.433 --- 10.0.0.2 ping statistics --- 00:10:41.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.433 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:41.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:41.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:10:41.433 00:10:41.433 --- 10.0.0.1 ping statistics --- 00:10:41.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.433 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:41.433 10:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:41.433 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:41.433 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:41.433 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:41.433 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:41.433 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3790328 00:10:41.433 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3790328 00:10:41.433 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3790328 ']' 00:10:41.433 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.433 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.433 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.433 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.433 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:41.433 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:41.433 [2024-12-13 10:12:35.087118] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
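Condensed from the nvmf_tcp_init trace above: the test keeps the second E810 port (cvl_0_1) in the root namespace as the initiator at 10.0.0.1/24, moves the first port (cvl_0_0) into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2/24, opens TCP port 4420, and checks connectivity in both directions before nvmf_tgt is launched inside that namespace. A minimal sketch of the same setup, with interface names and addresses copied from the log (the initial address flushes, error handling, and the iptables comment tag are omitted):

  NS=cvl_0_0_ns_spdk
  TARGET_IF=cvl_0_0       # serves NVMe/TCP at 10.0.0.2:4420 inside the namespace
  INITIATOR_IF=cvl_0_1    # stays in the root namespace as 10.0.0.1

  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                        # root namespace -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1    # target namespace -> initiator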
00:10:41.433 [2024-12-13 10:12:35.087206] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.433 [2024-12-13 10:12:35.206311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.690 [2024-12-13 10:12:35.325020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.690 [2024-12-13 10:12:35.325060] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.690 [2024-12-13 10:12:35.325071] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.690 [2024-12-13 10:12:35.325081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.690 [2024-12-13 10:12:35.325089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.690 [2024-12-13 10:12:35.327476] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.690 [2024-12-13 10:12:35.327537] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.690 [2024-12-13 10:12:35.327555] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.690 [2024-12-13 10:12:35.327565] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.256 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.256 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:42.256 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:42.256 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:42.256 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.256 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.256 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:42.256 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.256 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.256 [2024-12-13 10:12:35.934782] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.256 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.256 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:42.256 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.256 10:12:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.256 Malloc0 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.256 [2024-12-13 10:12:36.067790] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:42.256 test case1: single bdev can't be used in multiple subsystems 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.256 [2024-12-13 10:12:36.095653] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:42.256 [2024-12-13 10:12:36.095685] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:42.256 [2024-12-13 10:12:36.095702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.256 request: 00:10:42.256 { 00:10:42.256 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:42.256 "namespace": { 00:10:42.256 "bdev_name": "Malloc0", 00:10:42.256 "no_auto_visible": false, 
00:10:42.256 "hide_metadata": false 00:10:42.256 }, 00:10:42.256 "method": "nvmf_subsystem_add_ns", 00:10:42.256 "req_id": 1 00:10:42.256 } 00:10:42.256 Got JSON-RPC error response 00:10:42.256 response: 00:10:42.256 { 00:10:42.256 "code": -32602, 00:10:42.256 "message": "Invalid parameters" 00:10:42.256 } 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:42.256 Adding namespace failed - expected result. 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:42.256 test case2: host connect to nvmf target in multiple paths 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:42.256 [2024-12-13 10:12:36.107803] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.256 10:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:43.628 10:12:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:44.562 10:12:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:44.562 10:12:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:44.562 10:12:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:44.562 10:12:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:44.562 10:12:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:47.088 10:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:47.088 10:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:47.088 10:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:47.088 10:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:47.088 10:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:47.088 10:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:47.088 10:12:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:47.088 [global] 00:10:47.088 thread=1 00:10:47.088 invalidate=1 00:10:47.088 rw=write 00:10:47.088 time_based=1 00:10:47.088 runtime=1 00:10:47.088 ioengine=libaio 00:10:47.088 direct=1 00:10:47.088 bs=4096 00:10:47.088 iodepth=1 00:10:47.088 norandommap=0 00:10:47.088 numjobs=1 00:10:47.088 00:10:47.088 verify_dump=1 00:10:47.088 verify_backlog=512 00:10:47.088 verify_state_save=0 00:10:47.088 do_verify=1 00:10:47.088 verify=crc32c-intel 00:10:47.088 [job0] 00:10:47.088 filename=/dev/nvme0n1 00:10:47.088 Could not set queue depth (nvme0n1) 00:10:47.088 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:47.088 fio-3.35 00:10:47.088 Starting 1 thread 00:10:48.023 00:10:48.023 job0: (groupid=0, jobs=1): err= 0: pid=3791389: Fri Dec 13 10:12:41 2024 00:10:48.023 read: IOPS=1970, BW=7880KiB/s (8069kB/s)(7888KiB/1001msec) 00:10:48.023 slat (nsec): min=6919, max=21326, avg=7878.52, stdev=1017.99 00:10:48.023 clat (usec): min=196, max=520, avg=312.53, stdev=67.63 00:10:48.023 lat (usec): min=203, max=528, avg=320.41, stdev=67.63 00:10:48.023 clat percentiles (usec): 00:10:48.023 | 1.00th=[ 225], 5.00th=[ 237], 10.00th=[ 269], 20.00th=[ 277], 00:10:48.023 | 30.00th=[ 281], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:10:48.023 | 70.00th=[ 297], 80.00th=[ 338], 90.00th=[ 453], 95.00th=[ 461], 00:10:48.023 | 99.00th=[ 474], 99.50th=[ 478], 99.90th=[ 510], 99.95th=[ 523], 00:10:48.023 | 99.99th=[ 523] 00:10:48.023 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:48.023 slat (nsec): min=9875, max=48263, avg=11076.04, stdev=2016.06 00:10:48.023 clat (usec): min=127, max=442, avg=162.93, stdev=17.02 00:10:48.023 lat (usec): min=148, max=453, avg=174.01, stdev=17.36 00:10:48.023 clat percentiles (usec): 00:10:48.023 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 155], 00:10:48.023 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 161], 00:10:48.023 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 180], 00:10:48.023 | 99.00th=[ 241], 99.50th=[ 243], 99.90th=[ 367], 99.95th=[ 388], 00:10:48.023 | 99.99th=[ 445] 00:10:48.023 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:10:48.023 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:48.023 lat (usec) : 250=53.78%, 500=46.17%, 750=0.05% 00:10:48.023 cpu : usr=2.30%, sys=7.20%, ctx=4020, majf=0, minf=1 00:10:48.023 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.023 issued rwts: total=1972,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.023 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.023 00:10:48.023 Run status group 0 (all jobs): 00:10:48.023 READ: bw=7880KiB/s (8069kB/s), 7880KiB/s-7880KiB/s (8069kB/s-8069kB/s), io=7888KiB (8077kB), run=1001-1001msec 00:10:48.023 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:10:48.023 00:10:48.023 Disk stats (read/write): 00:10:48.023 nvme0n1: ios=1672/2048, merge=0/0, ticks=528/314, in_queue=842, util=91.58% 00:10:48.023 10:12:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:48.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:48.590 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:48.590 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:48.590 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:48.590 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.590 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:48.590 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.590 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:48.590 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:48.590 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:48.590 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:48.590 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:48.590 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:48.590 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:48.590 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:48.590 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:48.590 rmmod nvme_tcp 00:10:48.590 rmmod nvme_fabrics 00:10:48.590 rmmod nvme_keyring 00:10:48.849 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:48.849 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:48.849 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:48.849 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3790328 ']' 00:10:48.849 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3790328 00:10:48.849 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3790328 ']' 00:10:48.849 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3790328 00:10:48.849 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:48.849 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:48.849 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3790328 00:10:48.849 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:48.849 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:48.849 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3790328' 00:10:48.849 killing process with pid 3790328 00:10:48.849 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3790328 00:10:48.849 10:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 3790328 00:10:50.226 10:12:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:50.226 10:12:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:50.226 10:12:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:50.226 10:12:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:50.226 10:12:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:50.226 10:12:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:50.226 10:12:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:50.226 10:12:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:50.226 10:12:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:50.226 10:12:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.226 10:12:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.226 10:12:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.129 10:12:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:52.129 00:10:52.129 real 0m16.626s 00:10:52.129 user 0m40.594s 00:10:52.129 sys 0m5.108s 00:10:52.129 10:12:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.129 10:12:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.129 ************************************ 00:10:52.129 END TEST nvmf_nmic 00:10:52.129 ************************************ 00:10:52.129 10:12:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:52.129 10:12:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:52.129 10:12:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.129 10:12:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:52.388 ************************************ 00:10:52.388 START TEST nvmf_fio_target 00:10:52.388 ************************************ 00:10:52.388 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:52.388 * Looking for test storage... 
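Recapping nmic.sh test case 1 from the run above: a single bdev cannot back namespaces in two subsystems, because Malloc0 is already claimed (type exclusive_write) by cnode1, so nvmf_subsystem_add_ns on cnode2 fails with the "Invalid parameters" JSON-RPC error captured in the log. A rough reproduction of that sequence against a running nvmf_tgt is sketched below, assuming that rpc_cmd in the trace maps onto scripts/rpc.py with the same method names and arguments (flags copied from the trace):

  # sketch of nmic.sh test case 1: the second add_ns is expected to fail
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  # Malloc0 is already claimed by cnode1, so this call should return the error seen above
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'expected failure: bdev already claimed'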
00:10:52.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.388 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:52.388 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:52.388 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:52.388 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:52.388 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.388 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.388 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.388 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.388 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.388 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.388 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.388 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.388 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.388 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.388 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.388 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:52.388 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:52.388 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.388 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:52.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.389 --rc genhtml_branch_coverage=1 00:10:52.389 --rc genhtml_function_coverage=1 00:10:52.389 --rc genhtml_legend=1 00:10:52.389 --rc geninfo_all_blocks=1 00:10:52.389 --rc geninfo_unexecuted_blocks=1 00:10:52.389 00:10:52.389 ' 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:52.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.389 --rc genhtml_branch_coverage=1 00:10:52.389 --rc genhtml_function_coverage=1 00:10:52.389 --rc genhtml_legend=1 00:10:52.389 --rc geninfo_all_blocks=1 00:10:52.389 --rc geninfo_unexecuted_blocks=1 00:10:52.389 00:10:52.389 ' 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:52.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.389 --rc genhtml_branch_coverage=1 00:10:52.389 --rc genhtml_function_coverage=1 00:10:52.389 --rc genhtml_legend=1 00:10:52.389 --rc geninfo_all_blocks=1 00:10:52.389 --rc geninfo_unexecuted_blocks=1 00:10:52.389 00:10:52.389 ' 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:52.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.389 --rc genhtml_branch_coverage=1 00:10:52.389 --rc genhtml_function_coverage=1 00:10:52.389 --rc genhtml_legend=1 00:10:52.389 --rc geninfo_all_blocks=1 00:10:52.389 --rc geninfo_unexecuted_blocks=1 00:10:52.389 00:10:52.389 ' 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:52.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:52.389 10:12:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:52.389 10:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.656 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:57.656 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:57.657 10:12:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:57.657 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:57.657 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.657 10:12:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:57.657 Found net devices under 0000:af:00.0: cvl_0_0 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:57.657 Found net devices under 0000:af:00.1: cvl_0_1 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:57.657 10:12:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:57.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:57.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:10:57.657 00:10:57.657 --- 10.0.0.2 ping statistics --- 00:10:57.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.657 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:57.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:57.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:10:57.657 00:10:57.657 --- 10.0.0.1 ping statistics --- 00:10:57.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.657 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.657 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:57.658 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:57.658 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.658 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:57.658 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:57.658 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.658 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:57.658 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:57.658 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:57.658 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:57.658 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:57.658 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.658 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3795306 00:10:57.658 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3795306 00:10:57.658 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:57.658 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3795306 ']' 00:10:57.658 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.658 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.658 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.658 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.658 10:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.658 [2024-12-13 10:12:51.429352] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:57.658 [2024-12-13 10:12:51.429438] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.658 [2024-12-13 10:12:51.548047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.917 [2024-12-13 10:12:51.655770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.917 [2024-12-13 10:12:51.655814] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.917 [2024-12-13 10:12:51.655825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.917 [2024-12-13 10:12:51.655834] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.917 [2024-12-13 10:12:51.655842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:57.917 [2024-12-13 10:12:51.658176] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.917 [2024-12-13 10:12:51.658252] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.917 [2024-12-13 10:12:51.658359] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.917 [2024-12-13 10:12:51.658368] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.483 10:12:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.483 10:12:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:58.483 10:12:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:58.483 10:12:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:58.483 10:12:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.483 10:12:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.483 10:12:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:58.758 [2024-12-13 10:12:52.439600] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.758 10:12:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.016 10:12:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:59.016 10:12:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.275 10:12:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:59.275 10:12:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.534 10:12:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:59.534 10:12:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.791 10:12:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:59.791 10:12:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:00.049 10:12:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:00.306 10:12:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:00.306 10:12:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:00.564 10:12:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:00.564 10:12:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:00.822 10:12:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:00.822 10:12:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:01.080 10:12:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:01.080 10:12:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:01.080 10:12:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:01.338 10:12:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:01.338 10:12:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:01.595 10:12:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.853 [2024-12-13 10:12:55.509245] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.853 10:12:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:01.853 10:12:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:02.111 10:12:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:03.486 10:12:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:03.486 10:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:03.486 10:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:03.486 10:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:03.486 10:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:03.486 10:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:05.385 10:12:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:05.385 10:12:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:05.385 10:12:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:05.385 10:12:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:05.385 10:12:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:05.385 10:12:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:05.385 10:12:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:05.385 [global] 00:11:05.385 thread=1 00:11:05.385 invalidate=1 00:11:05.385 rw=write 00:11:05.385 time_based=1 00:11:05.385 runtime=1 00:11:05.385 ioengine=libaio 00:11:05.385 direct=1 00:11:05.385 bs=4096 00:11:05.385 iodepth=1 00:11:05.385 norandommap=0 00:11:05.385 numjobs=1 00:11:05.385 00:11:05.385 verify_dump=1 00:11:05.385 verify_backlog=512 00:11:05.385 verify_state_save=0 00:11:05.385 do_verify=1 00:11:05.385 verify=crc32c-intel 00:11:05.385 [job0] 00:11:05.385 filename=/dev/nvme0n1 00:11:05.385 [job1] 00:11:05.385 filename=/dev/nvme0n2 00:11:05.385 [job2] 00:11:05.385 filename=/dev/nvme0n3 00:11:05.385 [job3] 00:11:05.385 filename=/dev/nvme0n4 00:11:05.385 Could not set queue depth (nvme0n1) 00:11:05.385 Could not set queue depth (nvme0n2) 00:11:05.386 Could not set queue depth (nvme0n3) 00:11:05.386 Could not set queue depth (nvme0n4) 00:11:05.643 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.643 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.643 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.643 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.643 fio-3.35 00:11:05.643 Starting 4 threads 00:11:07.017 00:11:07.017 job0: (groupid=0, jobs=1): err= 0: pid=3796841: Fri Dec 13 10:13:00 2024 00:11:07.017 read: IOPS=538, BW=2153KiB/s (2204kB/s)(2172KiB/1009msec) 00:11:07.017 slat (nsec): min=7217, max=55679, avg=8726.25, stdev=3634.38 00:11:07.017 clat (usec): min=236, max=42076, avg=1413.91, stdev=6679.74 00:11:07.017 lat (usec): min=244, max=42101, avg=1422.63, stdev=6682.42 00:11:07.017 clat percentiles (usec): 00:11:07.017 | 1.00th=[ 251], 5.00th=[ 262], 10.00th=[ 265], 20.00th=[ 273], 
00:11:07.017 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:11:07.017 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 367], 00:11:07.017 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:07.017 | 99.99th=[42206] 00:11:07.017 write: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec); 0 zone resets 00:11:07.017 slat (nsec): min=6279, max=49606, avg=12326.77, stdev=2395.74 00:11:07.017 clat (usec): min=150, max=2631, avg=213.13, stdev=99.76 00:11:07.017 lat (usec): min=158, max=2642, avg=225.45, stdev=99.97 00:11:07.017 clat percentiles (usec): 00:11:07.017 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 186], 00:11:07.017 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 206], 00:11:07.017 | 70.00th=[ 227], 80.00th=[ 239], 90.00th=[ 243], 95.00th=[ 249], 00:11:07.017 | 99.00th=[ 351], 99.50th=[ 371], 99.90th=[ 2008], 99.95th=[ 2638], 00:11:07.017 | 99.99th=[ 2638] 00:11:07.017 bw ( KiB/s): min= 8192, max= 8192, per=58.40%, avg=8192.00, stdev= 0.00, samples=1 00:11:07.017 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:07.017 lat (usec) : 250=62.86%, 500=35.99%, 750=0.06% 00:11:07.017 lat (msec) : 4=0.13%, 50=0.96% 00:11:07.017 cpu : usr=1.59%, sys=2.28%, ctx=1569, majf=0, minf=1 00:11:07.018 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.018 issued rwts: total=543,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.018 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.018 job1: (groupid=0, jobs=1): err= 0: pid=3796842: Fri Dec 13 10:13:00 2024 00:11:07.018 read: IOPS=537, BW=2149KiB/s (2200kB/s)(2168KiB/1009msec) 00:11:07.018 slat (nsec): min=7622, max=37720, avg=10330.45, stdev=4057.02 00:11:07.018 clat (usec): min=221, max=41381, avg=1398.69, stdev=6685.28 00:11:07.018 lat (usec): min=231, max=41389, avg=1409.02, stdev=6687.24 00:11:07.018 clat percentiles (usec): 00:11:07.018 | 1.00th=[ 237], 5.00th=[ 245], 10.00th=[ 249], 20.00th=[ 255], 00:11:07.018 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:11:07.018 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 314], 00:11:07.018 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:07.018 | 99.99th=[41157] 00:11:07.018 write: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec); 0 zone resets 00:11:07.018 slat (usec): min=10, max=27494, avg=40.65, stdev=858.78 00:11:07.018 clat (usec): min=147, max=534, avg=192.93, stdev=21.54 00:11:07.018 lat (usec): min=159, max=27765, avg=233.58, stdev=861.50 00:11:07.018 clat percentiles (usec): 00:11:07.018 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 180], 00:11:07.018 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:11:07.018 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 215], 95.00th=[ 225], 00:11:07.018 | 99.00th=[ 255], 99.50th=[ 289], 99.90th=[ 338], 99.95th=[ 537], 00:11:07.018 | 99.99th=[ 537] 00:11:07.018 bw ( KiB/s): min= 8192, max= 8192, per=58.40%, avg=8192.00, stdev= 0.00, samples=1 00:11:07.018 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:07.018 lat (usec) : 250=68.45%, 500=30.52%, 750=0.06% 00:11:07.018 lat (msec) : 50=0.96% 00:11:07.018 cpu : usr=1.19%, sys=2.88%, ctx=1568, majf=0, minf=1 00:11:07.018 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:11:07.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.018 issued rwts: total=542,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.018 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.018 job2: (groupid=0, jobs=1): err= 0: pid=3796843: Fri Dec 13 10:13:00 2024 00:11:07.018 read: IOPS=579, BW=2319KiB/s (2375kB/s)(2340KiB/1009msec) 00:11:07.018 slat (nsec): min=7589, max=33112, avg=9206.11, stdev=2611.01 00:11:07.018 clat (usec): min=212, max=41984, avg=1317.07, stdev=6449.55 00:11:07.018 lat (usec): min=220, max=42007, avg=1326.28, stdev=6451.66 00:11:07.018 clat percentiles (usec): 00:11:07.018 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241], 00:11:07.018 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 260], 00:11:07.018 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 375], 95.00th=[ 498], 00:11:07.018 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:07.018 | 99.99th=[42206] 00:11:07.018 write: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec); 0 zone resets 00:11:07.018 slat (nsec): min=10793, max=44758, avg=13104.78, stdev=3180.24 00:11:07.018 clat (usec): min=153, max=681, avg=209.16, stdev=34.55 00:11:07.018 lat (usec): min=165, max=693, avg=222.27, stdev=35.01 00:11:07.018 clat percentiles (usec): 00:11:07.018 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 184], 00:11:07.018 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 210], 00:11:07.018 | 70.00th=[ 225], 80.00th=[ 237], 90.00th=[ 243], 95.00th=[ 247], 00:11:07.018 | 99.00th=[ 347], 99.50th=[ 375], 99.90th=[ 515], 99.95th=[ 685], 00:11:07.018 | 99.99th=[ 685] 00:11:07.018 bw ( KiB/s): min= 8192, max= 8192, per=58.40%, avg=8192.00, stdev= 0.00, samples=1 00:11:07.018 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:07.018 lat (usec) : 250=76.94%, 500=21.63%, 750=0.50% 00:11:07.018 lat (msec) : 50=0.93% 00:11:07.018 cpu : usr=0.99%, sys=1.88%, ctx=1610, majf=0, minf=1 00:11:07.018 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.018 issued rwts: total=585,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.018 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.018 job3: (groupid=0, jobs=1): err= 0: pid=3796844: Fri Dec 13 10:13:00 2024 00:11:07.018 read: IOPS=21, BW=86.1KiB/s (88.2kB/s)(88.0KiB/1022msec) 00:11:07.018 slat (nsec): min=12007, max=46493, avg=22486.95, stdev=6756.72 00:11:07.018 clat (usec): min=40581, max=44140, avg=41095.92, stdev=686.86 00:11:07.018 lat (usec): min=40593, max=44165, avg=41118.41, stdev=687.65 00:11:07.018 clat percentiles (usec): 00:11:07.018 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:07.018 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:07.018 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:07.018 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:11:07.018 | 99.99th=[44303] 00:11:07.018 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:11:07.018 slat (usec): min=11, max=746, avg=17.03, stdev=34.79 00:11:07.018 clat (usec): min=163, max=530, avg=207.33, stdev=29.97 00:11:07.018 lat (usec): 
min=185, max=1033, avg=224.36, stdev=49.60 00:11:07.018 clat percentiles (usec): 00:11:07.018 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:11:07.018 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:11:07.018 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 231], 95.00th=[ 249], 00:11:07.018 | 99.00th=[ 293], 99.50th=[ 424], 99.90th=[ 529], 99.95th=[ 529], 00:11:07.018 | 99.99th=[ 529] 00:11:07.018 bw ( KiB/s): min= 4096, max= 4096, per=29.20%, avg=4096.00, stdev= 0.00, samples=1 00:11:07.018 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:07.018 lat (usec) : 250=91.57%, 500=4.12%, 750=0.19% 00:11:07.018 lat (msec) : 50=4.12% 00:11:07.018 cpu : usr=0.39%, sys=1.08%, ctx=536, majf=0, minf=1 00:11:07.018 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.018 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.018 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.018 00:11:07.018 Run status group 0 (all jobs): 00:11:07.018 READ: bw=6622KiB/s (6781kB/s), 86.1KiB/s-2319KiB/s (88.2kB/s-2375kB/s), io=6768KiB (6930kB), run=1009-1022msec 00:11:07.018 WRITE: bw=13.7MiB/s (14.4MB/s), 2004KiB/s-4059KiB/s (2052kB/s-4157kB/s), io=14.0MiB (14.7MB), run=1009-1022msec 00:11:07.018 00:11:07.018 Disk stats (read/write): 00:11:07.018 nvme0n1: ios=591/1024, merge=0/0, ticks=1191/207, in_queue=1398, util=98.10% 00:11:07.018 nvme0n2: ios=562/1024, merge=0/0, ticks=1574/182, in_queue=1756, util=98.88% 00:11:07.018 nvme0n3: ios=606/1024, merge=0/0, ticks=1587/204, in_queue=1791, util=98.85% 00:11:07.018 nvme0n4: ios=75/512, merge=0/0, ticks=1322/99, in_queue=1421, util=98.74% 00:11:07.018 10:13:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:07.018 [global] 00:11:07.018 thread=1 00:11:07.018 invalidate=1 00:11:07.018 rw=randwrite 00:11:07.018 time_based=1 00:11:07.018 runtime=1 00:11:07.018 ioengine=libaio 00:11:07.018 direct=1 00:11:07.018 bs=4096 00:11:07.018 iodepth=1 00:11:07.018 norandommap=0 00:11:07.018 numjobs=1 00:11:07.018 00:11:07.018 verify_dump=1 00:11:07.018 verify_backlog=512 00:11:07.018 verify_state_save=0 00:11:07.018 do_verify=1 00:11:07.018 verify=crc32c-intel 00:11:07.018 [job0] 00:11:07.018 filename=/dev/nvme0n1 00:11:07.018 [job1] 00:11:07.018 filename=/dev/nvme0n2 00:11:07.018 [job2] 00:11:07.018 filename=/dev/nvme0n3 00:11:07.018 [job3] 00:11:07.018 filename=/dev/nvme0n4 00:11:07.018 Could not set queue depth (nvme0n1) 00:11:07.018 Could not set queue depth (nvme0n2) 00:11:07.018 Could not set queue depth (nvme0n3) 00:11:07.018 Could not set queue depth (nvme0n4) 00:11:07.276 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.276 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.276 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.276 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.276 fio-3.35 00:11:07.276 Starting 4 threads 00:11:08.649 00:11:08.649 job0: (groupid=0, jobs=1): err= 0: 
pid=3797213: Fri Dec 13 10:13:02 2024 00:11:08.649 read: IOPS=21, BW=85.1KiB/s (87.1kB/s)(88.0KiB/1034msec) 00:11:08.649 slat (nsec): min=4562, max=23883, avg=20619.50, stdev=4156.18 00:11:08.649 clat (usec): min=40820, max=41081, avg=40971.44, stdev=55.30 00:11:08.649 lat (usec): min=40824, max=41104, avg=40992.06, stdev=57.13 00:11:08.649 clat percentiles (usec): 00:11:08.649 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:08.649 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:08.649 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:08.649 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:08.649 | 99.99th=[41157] 00:11:08.649 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:11:08.649 slat (nsec): min=5455, max=34859, avg=9996.32, stdev=2624.30 00:11:08.649 clat (usec): min=141, max=385, avg=241.50, stdev=37.50 00:11:08.649 lat (usec): min=149, max=420, avg=251.50, stdev=37.55 00:11:08.649 clat percentiles (usec): 00:11:08.649 | 1.00th=[ 182], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 210], 00:11:08.649 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 241], 00:11:08.649 | 70.00th=[ 255], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 306], 00:11:08.649 | 99.00th=[ 330], 99.50th=[ 347], 99.90th=[ 388], 99.95th=[ 388], 00:11:08.649 | 99.99th=[ 388] 00:11:08.649 bw ( KiB/s): min= 4096, max= 4096, per=29.71%, avg=4096.00, stdev= 0.00, samples=1 00:11:08.649 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:08.649 lat (usec) : 250=64.98%, 500=30.90% 00:11:08.649 lat (msec) : 50=4.12% 00:11:08.649 cpu : usr=0.19%, sys=0.97%, ctx=538, majf=0, minf=1 00:11:08.649 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.649 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.649 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.649 job1: (groupid=0, jobs=1): err= 0: pid=3797214: Fri Dec 13 10:13:02 2024 00:11:08.649 read: IOPS=1486, BW=5946KiB/s (6089kB/s)(6160KiB/1036msec) 00:11:08.649 slat (nsec): min=2189, max=25424, avg=7828.60, stdev=1623.70 00:11:08.649 clat (usec): min=192, max=41398, avg=407.82, stdev=2537.68 00:11:08.649 lat (usec): min=198, max=41412, avg=415.65, stdev=2538.38 00:11:08.649 clat percentiles (usec): 00:11:08.649 | 1.00th=[ 210], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 233], 00:11:08.649 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:11:08.649 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 281], 00:11:08.649 | 99.00th=[ 445], 99.50th=[ 482], 99.90th=[41157], 99.95th=[41157], 00:11:08.649 | 99.99th=[41157] 00:11:08.649 write: IOPS=1976, BW=7907KiB/s (8097kB/s)(8192KiB/1036msec); 0 zone resets 00:11:08.649 slat (nsec): min=3087, max=40719, avg=10528.64, stdev=4085.89 00:11:08.649 clat (usec): min=113, max=3165, avg=177.56, stdev=79.98 00:11:08.649 lat (usec): min=116, max=3178, avg=188.09, stdev=81.19 00:11:08.649 clat percentiles (usec): 00:11:08.649 | 1.00th=[ 119], 5.00th=[ 124], 10.00th=[ 128], 20.00th=[ 145], 00:11:08.649 | 30.00th=[ 151], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 169], 00:11:08.649 | 70.00th=[ 182], 80.00th=[ 208], 90.00th=[ 241], 95.00th=[ 281], 00:11:08.649 | 99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 416], 99.95th=[ 490], 
00:11:08.649 | 99.99th=[ 3163] 00:11:08.649 bw ( KiB/s): min= 7528, max= 8856, per=59.43%, avg=8192.00, stdev=939.04, samples=2 00:11:08.649 iops : min= 1882, max= 2214, avg=2048.00, stdev=234.76, samples=2 00:11:08.649 lat (usec) : 250=77.42%, 500=22.35%, 750=0.03% 00:11:08.649 lat (msec) : 4=0.03%, 50=0.17% 00:11:08.649 cpu : usr=3.00%, sys=4.83%, ctx=3591, majf=0, minf=2 00:11:08.649 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.649 issued rwts: total=1540,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.649 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.649 job2: (groupid=0, jobs=1): err= 0: pid=3797215: Fri Dec 13 10:13:02 2024 00:11:08.649 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:11:08.649 slat (nsec): min=9641, max=24877, avg=23567.50, stdev=3122.39 00:11:08.649 clat (usec): min=40804, max=42018, avg=41095.95, stdev=371.09 00:11:08.649 lat (usec): min=40813, max=42043, avg=41119.52, stdev=371.65 00:11:08.649 clat percentiles (usec): 00:11:08.649 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:08.649 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:08.649 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:11:08.649 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:08.649 | 99.99th=[42206] 00:11:08.649 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:11:08.649 slat (nsec): min=9528, max=43121, avg=10695.47, stdev=2146.45 00:11:08.649 clat (usec): min=150, max=342, avg=179.06, stdev=17.32 00:11:08.649 lat (usec): min=160, max=376, avg=189.76, stdev=18.28 00:11:08.649 clat percentiles (usec): 00:11:08.649 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:11:08.649 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:11:08.649 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 210], 00:11:08.649 | 99.00th=[ 221], 99.50th=[ 229], 99.90th=[ 343], 99.95th=[ 343], 00:11:08.649 | 99.99th=[ 343] 00:11:08.649 bw ( KiB/s): min= 4096, max= 4096, per=29.71%, avg=4096.00, stdev= 0.00, samples=1 00:11:08.649 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:08.649 lat (usec) : 250=95.51%, 500=0.37% 00:11:08.649 lat (msec) : 50=4.12% 00:11:08.650 cpu : usr=0.40%, sys=0.50%, ctx=536, majf=0, minf=1 00:11:08.650 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.650 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.650 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.650 job3: (groupid=0, jobs=1): err= 0: pid=3797216: Fri Dec 13 10:13:02 2024 00:11:08.650 read: IOPS=27, BW=112KiB/s (114kB/s)(116KiB/1040msec) 00:11:08.650 slat (nsec): min=8257, max=27811, avg=19708.90, stdev=5965.17 00:11:08.650 clat (usec): min=234, max=42057, avg=31192.27, stdev=17760.62 00:11:08.650 lat (usec): min=243, max=42084, avg=31211.98, stdev=17766.40 00:11:08.650 clat percentiles (usec): 00:11:08.650 | 1.00th=[ 235], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 281], 00:11:08.650 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:08.650 
| 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:11:08.650 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:08.650 | 99.99th=[42206] 00:11:08.650 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:11:08.650 slat (nsec): min=11139, max=52067, avg=15013.42, stdev=5092.41 00:11:08.650 clat (usec): min=163, max=1164, avg=240.36, stdev=73.75 00:11:08.650 lat (usec): min=176, max=1177, avg=255.37, stdev=74.41 00:11:08.650 clat percentiles (usec): 00:11:08.650 | 1.00th=[ 169], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 198], 00:11:08.650 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 231], 00:11:08.650 | 70.00th=[ 262], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 318], 00:11:08.650 | 99.00th=[ 383], 99.50th=[ 791], 99.90th=[ 1172], 99.95th=[ 1172], 00:11:08.650 | 99.99th=[ 1172] 00:11:08.650 bw ( KiB/s): min= 4096, max= 4096, per=29.71%, avg=4096.00, stdev= 0.00, samples=1 00:11:08.650 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:08.650 lat (usec) : 250=64.14%, 500=30.87%, 750=0.37%, 1000=0.37% 00:11:08.650 lat (msec) : 2=0.18%, 50=4.07% 00:11:08.650 cpu : usr=0.58%, sys=0.87%, ctx=543, majf=0, minf=1 00:11:08.650 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:08.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.650 issued rwts: total=29,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.650 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:08.650 00:11:08.650 Run status group 0 (all jobs): 00:11:08.650 READ: bw=6204KiB/s (6353kB/s), 85.1KiB/s-5946KiB/s (87.1kB/s-6089kB/s), io=6452KiB (6607kB), run=1005-1040msec 00:11:08.650 WRITE: bw=13.5MiB/s (14.1MB/s), 1969KiB/s-7907KiB/s (2016kB/s-8097kB/s), io=14.0MiB (14.7MB), run=1005-1040msec 00:11:08.650 00:11:08.650 Disk stats (read/write): 00:11:08.650 nvme0n1: ios=53/512, merge=0/0, ticks=1683/125, in_queue=1808, util=98.00% 00:11:08.650 nvme0n2: ios=1566/1955, merge=0/0, ticks=1433/330, in_queue=1763, util=98.27% 00:11:08.650 nvme0n3: ios=77/512, merge=0/0, ticks=930/93, in_queue=1023, util=100.00% 00:11:08.650 nvme0n4: ios=66/512, merge=0/0, ticks=1343/113, in_queue=1456, util=99.90% 00:11:08.650 10:13:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:08.650 [global] 00:11:08.650 thread=1 00:11:08.650 invalidate=1 00:11:08.650 rw=write 00:11:08.650 time_based=1 00:11:08.650 runtime=1 00:11:08.650 ioengine=libaio 00:11:08.650 direct=1 00:11:08.650 bs=4096 00:11:08.650 iodepth=128 00:11:08.650 norandommap=0 00:11:08.650 numjobs=1 00:11:08.650 00:11:08.650 verify_dump=1 00:11:08.650 verify_backlog=512 00:11:08.650 verify_state_save=0 00:11:08.650 do_verify=1 00:11:08.650 verify=crc32c-intel 00:11:08.650 [job0] 00:11:08.650 filename=/dev/nvme0n1 00:11:08.650 [job1] 00:11:08.650 filename=/dev/nvme0n2 00:11:08.650 [job2] 00:11:08.650 filename=/dev/nvme0n3 00:11:08.650 [job3] 00:11:08.650 filename=/dev/nvme0n4 00:11:08.650 Could not set queue depth (nvme0n1) 00:11:08.650 Could not set queue depth (nvme0n2) 00:11:08.650 Could not set queue depth (nvme0n3) 00:11:08.650 Could not set queue depth (nvme0n4) 00:11:08.908 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:08.908 job1: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:08.908 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:08.908 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:08.908 fio-3.35 00:11:08.908 Starting 4 threads 00:11:10.319 00:11:10.319 job0: (groupid=0, jobs=1): err= 0: pid=3797587: Fri Dec 13 10:13:03 2024 00:11:10.319 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:11:10.319 slat (nsec): min=1396, max=15524k, avg=163441.76, stdev=1162866.27 00:11:10.319 clat (usec): min=7100, max=53141, avg=20054.64, stdev=8907.11 00:11:10.319 lat (usec): min=7115, max=53167, avg=20218.08, stdev=9022.40 00:11:10.319 clat percentiles (usec): 00:11:10.319 | 1.00th=[ 7308], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[12387], 00:11:10.319 | 30.00th=[15664], 40.00th=[16712], 50.00th=[17433], 60.00th=[19006], 00:11:10.319 | 70.00th=[21627], 80.00th=[26608], 90.00th=[37487], 95.00th=[40109], 00:11:10.319 | 99.00th=[40109], 99.50th=[42206], 99.90th=[50070], 99.95th=[52691], 00:11:10.319 | 99.99th=[53216] 00:11:10.319 write: IOPS=3160, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1008msec); 0 zone resets 00:11:10.319 slat (usec): min=2, max=12515, avg=147.93, stdev=837.42 00:11:10.319 clat (usec): min=4919, max=62856, avg=20724.69, stdev=11905.89 00:11:10.319 lat (usec): min=5881, max=62866, avg=20872.63, stdev=11990.08 00:11:10.319 clat percentiles (usec): 00:11:10.319 | 1.00th=[ 7963], 5.00th=[10159], 10.00th=[10945], 20.00th=[12256], 00:11:10.319 | 30.00th=[13304], 40.00th=[14746], 50.00th=[16909], 60.00th=[20317], 00:11:10.319 | 70.00th=[22938], 80.00th=[23725], 90.00th=[38536], 95.00th=[50070], 00:11:10.319 | 99.00th=[62129], 99.50th=[62129], 99.90th=[62653], 99.95th=[62653], 00:11:10.319 | 99.99th=[62653] 00:11:10.319 bw ( KiB/s): min=11400, max=13224, per=17.71%, avg=12312.00, stdev=1289.76, samples=2 00:11:10.319 iops : min= 2850, max= 3306, avg=3078.00, stdev=322.44, samples=2 00:11:10.319 lat (msec) : 10=5.56%, 20=55.85%, 50=36.05%, 100=2.54% 00:11:10.319 cpu : usr=1.89%, sys=4.47%, ctx=298, majf=0, minf=1 00:11:10.319 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:10.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.319 issued rwts: total=3072,3186,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.319 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.319 job1: (groupid=0, jobs=1): err= 0: pid=3797594: Fri Dec 13 10:13:03 2024 00:11:10.319 read: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec) 00:11:10.319 slat (nsec): min=1236, max=11502k, avg=97179.13, stdev=698130.60 00:11:10.319 clat (usec): min=3597, max=23121, avg=11911.71, stdev=2980.73 00:11:10.319 lat (usec): min=3603, max=23133, avg=12008.89, stdev=3021.96 00:11:10.319 clat percentiles (usec): 00:11:10.319 | 1.00th=[ 4555], 5.00th=[ 8356], 10.00th=[ 9634], 20.00th=[10290], 00:11:10.319 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11469], 00:11:10.319 | 70.00th=[12387], 80.00th=[13566], 90.00th=[16909], 95.00th=[18220], 00:11:10.319 | 99.00th=[20055], 99.50th=[20579], 99.90th=[21890], 99.95th=[21890], 00:11:10.319 | 99.99th=[23200] 00:11:10.319 write: IOPS=5803, BW=22.7MiB/s (23.8MB/s)(22.9MiB/1010msec); 0 zone resets 00:11:10.319 slat (usec): min=2, max=10736, avg=71.85, stdev=304.21 
00:11:10.319 clat (usec): min=1491, max=24342, avg=10399.10, stdev=2453.63 00:11:10.319 lat (usec): min=1554, max=24358, avg=10470.95, stdev=2479.69 00:11:10.319 clat percentiles (usec): 00:11:10.319 | 1.00th=[ 3032], 5.00th=[ 4817], 10.00th=[ 6652], 20.00th=[ 9372], 00:11:10.319 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:11:10.319 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12518], 00:11:10.319 | 99.00th=[18220], 99.50th=[19268], 99.90th=[20841], 99.95th=[21890], 00:11:10.319 | 99.99th=[24249] 00:11:10.319 bw ( KiB/s): min=21304, max=24576, per=33.00%, avg=22940.00, stdev=2313.65, samples=2 00:11:10.319 iops : min= 5326, max= 6144, avg=5735.00, stdev=578.41, samples=2 00:11:10.319 lat (msec) : 2=0.01%, 4=1.62%, 10=19.91%, 20=77.68%, 50=0.78% 00:11:10.319 cpu : usr=4.76%, sys=5.15%, ctx=744, majf=0, minf=2 00:11:10.319 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:10.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.319 issued rwts: total=5632,5862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.319 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.319 job2: (groupid=0, jobs=1): err= 0: pid=3797613: Fri Dec 13 10:13:03 2024 00:11:10.319 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:11:10.319 slat (nsec): min=1738, max=20449k, avg=140730.92, stdev=994987.28 00:11:10.319 clat (usec): min=8541, max=60280, avg=19792.85, stdev=9487.06 00:11:10.319 lat (usec): min=8549, max=60307, avg=19933.58, stdev=9579.50 00:11:10.319 clat percentiles (usec): 00:11:10.319 | 1.00th=[ 8586], 5.00th=[11076], 10.00th=[12518], 20.00th=[13435], 00:11:10.319 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15795], 60.00th=[17171], 00:11:10.319 | 70.00th=[19268], 80.00th=[26084], 90.00th=[35390], 95.00th=[42730], 00:11:10.319 | 99.00th=[50594], 99.50th=[50594], 99.90th=[54789], 99.95th=[59507], 00:11:10.319 | 99.99th=[60031] 00:11:10.319 write: IOPS=3356, BW=13.1MiB/s (13.7MB/s)(13.2MiB/1008msec); 0 zone resets 00:11:10.319 slat (usec): min=2, max=23072, avg=149.48, stdev=941.44 00:11:10.319 clat (usec): min=3303, max=50398, avg=19336.48, stdev=7649.48 00:11:10.319 lat (usec): min=3314, max=50406, avg=19485.96, stdev=7725.18 00:11:10.319 clat percentiles (usec): 00:11:10.319 | 1.00th=[ 5014], 5.00th=[10814], 10.00th=[11338], 20.00th=[13435], 00:11:10.319 | 30.00th=[14484], 40.00th=[15795], 50.00th=[18482], 60.00th=[20841], 00:11:10.319 | 70.00th=[22938], 80.00th=[23725], 90.00th=[26870], 95.00th=[31065], 00:11:10.319 | 99.00th=[47449], 99.50th=[48497], 99.90th=[50594], 99.95th=[50594], 00:11:10.319 | 99.99th=[50594] 00:11:10.319 bw ( KiB/s): min=12288, max=13752, per=18.73%, avg=13020.00, stdev=1035.20, samples=2 00:11:10.319 iops : min= 3072, max= 3438, avg=3255.00, stdev=258.80, samples=2 00:11:10.319 lat (msec) : 4=0.11%, 10=2.99%, 20=59.66%, 50=36.55%, 100=0.70% 00:11:10.319 cpu : usr=2.98%, sys=4.97%, ctx=302, majf=0, minf=1 00:11:10.319 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:10.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.319 issued rwts: total=3072,3383,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.319 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.319 job3: (groupid=0, jobs=1): err= 0: pid=3797619: Fri 
Dec 13 10:13:03 2024 00:11:10.319 read: IOPS=4710, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1005msec) 00:11:10.319 slat (nsec): min=1348, max=12038k, avg=117229.36, stdev=826653.77 00:11:10.319 clat (usec): min=1167, max=67514, avg=13456.76, stdev=5804.08 00:11:10.319 lat (usec): min=4558, max=67516, avg=13573.99, stdev=5871.13 00:11:10.319 clat percentiles (usec): 00:11:10.319 | 1.00th=[ 5080], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[11076], 00:11:10.319 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[12125], 00:11:10.319 | 70.00th=[13435], 80.00th=[15795], 90.00th=[18744], 95.00th=[20841], 00:11:10.319 | 99.00th=[32113], 99.50th=[64750], 99.90th=[67634], 99.95th=[67634], 00:11:10.319 | 99.99th=[67634] 00:11:10.319 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:11:10.319 slat (usec): min=2, max=12018, avg=81.92, stdev=369.08 00:11:10.319 clat (usec): min=2295, max=67515, avg=12429.93, stdev=6321.38 00:11:10.319 lat (usec): min=2335, max=67519, avg=12511.86, stdev=6339.56 00:11:10.319 clat percentiles (usec): 00:11:10.319 | 1.00th=[ 3916], 5.00th=[ 5932], 10.00th=[ 7832], 20.00th=[11207], 00:11:10.319 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:11:10.319 | 70.00th=[12518], 80.00th=[12780], 90.00th=[14615], 95.00th=[15664], 00:11:10.319 | 99.00th=[58459], 99.50th=[59507], 99.90th=[59507], 99.95th=[60031], 00:11:10.319 | 99.99th=[67634] 00:11:10.320 bw ( KiB/s): min=20464, max=20480, per=29.45%, avg=20472.00, stdev=11.31, samples=2 00:11:10.320 iops : min= 5116, max= 5120, avg=5118.00, stdev= 2.83, samples=2 00:11:10.320 lat (msec) : 2=0.01%, 4=0.63%, 10=13.72%, 20=81.13%, 50=3.46% 00:11:10.320 lat (msec) : 100=1.05% 00:11:10.320 cpu : usr=3.39%, sys=5.78%, ctx=653, majf=0, minf=1 00:11:10.320 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:10.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.320 issued rwts: total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.320 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.320 00:11:10.320 Run status group 0 (all jobs): 00:11:10.320 READ: bw=63.9MiB/s (67.0MB/s), 11.9MiB/s-21.8MiB/s (12.5MB/s-22.8MB/s), io=64.5MiB (67.6MB), run=1005-1010msec 00:11:10.320 WRITE: bw=67.9MiB/s (71.2MB/s), 12.3MiB/s-22.7MiB/s (12.9MB/s-23.8MB/s), io=68.6MiB (71.9MB), run=1005-1010msec 00:11:10.320 00:11:10.320 Disk stats (read/write): 00:11:10.320 nvme0n1: ios=2596/2823, merge=0/0, ticks=34435/40365, in_queue=74800, util=100.00% 00:11:10.320 nvme0n2: ios=4608/5071, merge=0/0, ticks=52821/51598, in_queue=104419, util=86.59% 00:11:10.320 nvme0n3: ios=2586/2895, merge=0/0, ticks=30662/31630, in_queue=62292, util=94.89% 00:11:10.320 nvme0n4: ios=4114/4159, merge=0/0, ticks=54127/51669, in_queue=105796, util=100.00% 00:11:10.320 10:13:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:10.320 [global] 00:11:10.320 thread=1 00:11:10.320 invalidate=1 00:11:10.320 rw=randwrite 00:11:10.320 time_based=1 00:11:10.320 runtime=1 00:11:10.320 ioengine=libaio 00:11:10.320 direct=1 00:11:10.320 bs=4096 00:11:10.320 iodepth=128 00:11:10.320 norandommap=0 00:11:10.320 numjobs=1 00:11:10.320 00:11:10.320 verify_dump=1 00:11:10.320 verify_backlog=512 00:11:10.320 verify_state_save=0 00:11:10.320 do_verify=1 
00:11:10.320 verify=crc32c-intel 00:11:10.320 [job0] 00:11:10.320 filename=/dev/nvme0n1 00:11:10.320 [job1] 00:11:10.320 filename=/dev/nvme0n2 00:11:10.320 [job2] 00:11:10.320 filename=/dev/nvme0n3 00:11:10.320 [job3] 00:11:10.320 filename=/dev/nvme0n4 00:11:10.320 Could not set queue depth (nvme0n1) 00:11:10.320 Could not set queue depth (nvme0n2) 00:11:10.320 Could not set queue depth (nvme0n3) 00:11:10.320 Could not set queue depth (nvme0n4) 00:11:10.577 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.577 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.577 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.577 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:10.577 fio-3.35 00:11:10.577 Starting 4 threads 00:11:11.946 00:11:11.946 job0: (groupid=0, jobs=1): err= 0: pid=3798069: Fri Dec 13 10:13:05 2024 00:11:11.946 read: IOPS=3011, BW=11.8MiB/s (12.3MB/s)(12.0MiB/1020msec) 00:11:11.946 slat (nsec): min=1431, max=30363k, avg=168350.42, stdev=1322511.84 00:11:11.946 clat (msec): min=4, max=151, avg=18.60, stdev=18.65 00:11:11.946 lat (msec): min=4, max=151, avg=18.77, stdev=18.84 00:11:11.946 clat percentiles (msec): 00:11:11.946 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 12], 00:11:11.946 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 13], 60.00th=[ 13], 00:11:11.946 | 70.00th=[ 14], 80.00th=[ 21], 90.00th=[ 31], 95.00th=[ 55], 00:11:11.946 | 99.00th=[ 120], 99.50th=[ 138], 99.90th=[ 153], 99.95th=[ 153], 00:11:11.946 | 99.99th=[ 153] 00:11:11.946 write: IOPS=3284, BW=12.8MiB/s (13.5MB/s)(13.1MiB/1020msec); 0 zone resets 00:11:11.946 slat (usec): min=2, max=20432, avg=135.02, stdev=892.51 00:11:11.946 clat (msec): min=3, max=151, avg=21.43, stdev=22.60 00:11:11.946 lat (msec): min=3, max=151, avg=21.56, stdev=22.72 00:11:11.946 clat percentiles (msec): 00:11:11.946 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 12], 00:11:11.946 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 13], 60.00th=[ 18], 00:11:11.946 | 70.00th=[ 23], 80.00th=[ 25], 90.00th=[ 33], 95.00th=[ 60], 00:11:11.946 | 99.00th=[ 130], 99.50th=[ 131], 99.90th=[ 138], 99.95th=[ 153], 00:11:11.946 | 99.99th=[ 153] 00:11:11.946 bw ( KiB/s): min= 9992, max=15792, per=18.77%, avg=12892.00, stdev=4101.22, samples=2 00:11:11.946 iops : min= 2498, max= 3948, avg=3223.00, stdev=1025.30, samples=2 00:11:11.946 lat (msec) : 4=0.44%, 10=10.12%, 20=60.79%, 50=22.95%, 100=3.24% 00:11:11.946 lat (msec) : 250=2.46% 00:11:11.946 cpu : usr=2.06%, sys=4.51%, ctx=302, majf=0, minf=1 00:11:11.946 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:11.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.946 issued rwts: total=3072,3350,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.946 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.946 job1: (groupid=0, jobs=1): err= 0: pid=3798079: Fri Dec 13 10:13:05 2024 00:11:11.946 read: IOPS=4808, BW=18.8MiB/s (19.7MB/s)(19.0MiB/1009msec) 00:11:11.946 slat (nsec): min=1347, max=10070k, avg=97030.80, stdev=684205.31 00:11:11.946 clat (usec): min=3815, max=22080, avg=12122.16, stdev=2967.53 00:11:11.946 lat (usec): min=3827, max=22104, avg=12219.19, stdev=3011.34 
00:11:11.946 clat percentiles (usec): 00:11:11.946 | 1.00th=[ 5080], 5.00th=[ 8717], 10.00th=[ 9765], 20.00th=[10290], 00:11:11.946 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11207], 60.00th=[11731], 00:11:11.946 | 70.00th=[12518], 80.00th=[14746], 90.00th=[16909], 95.00th=[18220], 00:11:11.946 | 99.00th=[20317], 99.50th=[21103], 99.90th=[21103], 99.95th=[21365], 00:11:11.946 | 99.99th=[22152] 00:11:11.946 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:11:11.946 slat (usec): min=2, max=14793, avg=96.66, stdev=640.44 00:11:11.946 clat (usec): min=887, max=90071, avg=13473.25, stdev=12030.91 00:11:11.946 lat (usec): min=916, max=90084, avg=13569.91, stdev=12110.82 00:11:11.946 clat percentiles (usec): 00:11:11.946 | 1.00th=[ 3130], 5.00th=[ 5473], 10.00th=[ 7177], 20.00th=[ 8848], 00:11:11.946 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:11:11.946 | 70.00th=[11469], 80.00th=[11600], 90.00th=[19268], 95.00th=[43779], 00:11:11.946 | 99.00th=[72877], 99.50th=[82314], 99.90th=[89654], 99.95th=[89654], 00:11:11.946 | 99.99th=[89654] 00:11:11.946 bw ( KiB/s): min=16384, max=24576, per=29.82%, avg=20480.00, stdev=5792.62, samples=2 00:11:11.946 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:11:11.946 lat (usec) : 1000=0.05% 00:11:11.946 lat (msec) : 4=1.19%, 10=19.00%, 20=74.07%, 50=4.09%, 100=1.59% 00:11:11.946 cpu : usr=3.67%, sys=5.06%, ctx=555, majf=0, minf=2 00:11:11.946 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:11.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.946 issued rwts: total=4852,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.946 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.946 job2: (groupid=0, jobs=1): err= 0: pid=3798099: Fri Dec 13 10:13:05 2024 00:11:11.946 read: IOPS=3513, BW=13.7MiB/s (14.4MB/s)(14.0MiB/1020msec) 00:11:11.946 slat (nsec): min=1128, max=19490k, avg=127017.26, stdev=1052134.70 00:11:11.946 clat (usec): min=1366, max=42042, avg=16584.35, stdev=5758.34 00:11:11.946 lat (usec): min=1372, max=43386, avg=16711.36, stdev=5863.40 00:11:11.946 clat percentiles (usec): 00:11:11.946 | 1.00th=[ 1778], 5.00th=[10552], 10.00th=[12125], 20.00th=[12911], 00:11:11.946 | 30.00th=[13435], 40.00th=[13960], 50.00th=[14615], 60.00th=[15008], 00:11:11.946 | 70.00th=[19268], 80.00th=[22676], 90.00th=[23987], 95.00th=[26870], 00:11:11.946 | 99.00th=[30802], 99.50th=[32375], 99.90th=[42206], 99.95th=[42206], 00:11:11.946 | 99.99th=[42206] 00:11:11.946 write: IOPS=3849, BW=15.0MiB/s (15.8MB/s)(15.3MiB/1020msec); 0 zone resets 00:11:11.946 slat (usec): min=2, max=18448, avg=115.57, stdev=850.68 00:11:11.946 clat (usec): min=545, max=80858, avg=17831.79, stdev=12748.33 00:11:11.946 lat (usec): min=621, max=80869, avg=17947.36, stdev=12834.64 00:11:11.946 clat percentiles (usec): 00:11:11.946 | 1.00th=[ 3884], 5.00th=[ 6718], 10.00th=[ 8225], 20.00th=[ 9634], 00:11:11.946 | 30.00th=[11469], 40.00th=[12518], 50.00th=[13566], 60.00th=[15008], 00:11:11.946 | 70.00th=[19006], 80.00th=[23987], 90.00th=[30540], 95.00th=[47449], 00:11:11.946 | 99.00th=[72877], 99.50th=[77071], 99.90th=[79168], 99.95th=[81265], 00:11:11.946 | 99.99th=[81265] 00:11:11.946 bw ( KiB/s): min= 9920, max=20472, per=22.12%, avg=15196.00, stdev=7461.39, samples=2 00:11:11.946 iops : min= 2480, max= 5118, avg=3799.00, stdev=1865.35, samples=2 00:11:11.946 lat 
(usec) : 750=0.05% 00:11:11.946 lat (msec) : 2=0.48%, 4=0.93%, 10=11.66%, 20=58.97%, 50=25.93% 00:11:11.946 lat (msec) : 100=1.97% 00:11:11.946 cpu : usr=3.34%, sys=3.93%, ctx=303, majf=0, minf=2 00:11:11.946 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:11.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.947 issued rwts: total=3584,3926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.947 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.947 job3: (groupid=0, jobs=1): err= 0: pid=3798105: Fri Dec 13 10:13:05 2024 00:11:11.947 read: IOPS=4590, BW=17.9MiB/s (18.8MB/s)(18.1MiB/1012msec) 00:11:11.947 slat (nsec): min=1330, max=12500k, avg=109617.34, stdev=776556.30 00:11:11.947 clat (usec): min=4264, max=25484, avg=13316.85, stdev=3397.70 00:11:11.947 lat (usec): min=4271, max=25488, avg=13426.47, stdev=3449.07 00:11:11.947 clat percentiles (usec): 00:11:11.947 | 1.00th=[ 5800], 5.00th=[ 9372], 10.00th=[10552], 20.00th=[11600], 00:11:11.947 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12125], 60.00th=[12387], 00:11:11.947 | 70.00th=[13304], 80.00th=[14746], 90.00th=[19006], 95.00th=[21103], 00:11:11.947 | 99.00th=[23200], 99.50th=[23987], 99.90th=[25560], 99.95th=[25560], 00:11:11.947 | 99.99th=[25560] 00:11:11.947 write: IOPS=5059, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1012msec); 0 zone resets 00:11:11.947 slat (usec): min=2, max=33194, avg=90.25, stdev=685.33 00:11:11.947 clat (usec): min=751, max=49323, avg=12949.70, stdev=6395.37 00:11:11.947 lat (usec): min=1960, max=49333, avg=13039.94, stdev=6423.98 00:11:11.947 clat percentiles (usec): 00:11:11.947 | 1.00th=[ 3458], 5.00th=[ 5669], 10.00th=[ 7373], 20.00th=[10028], 00:11:11.947 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12518], 60.00th=[12649], 00:11:11.947 | 70.00th=[12780], 80.00th=[13042], 90.00th=[14353], 95.00th=[25822], 00:11:11.947 | 99.00th=[41157], 99.50th=[43779], 99.90th=[45876], 99.95th=[45876], 00:11:11.947 | 99.99th=[49546] 00:11:11.947 bw ( KiB/s): min=19784, max=20464, per=29.30%, avg=20124.00, stdev=480.83, samples=2 00:11:11.947 iops : min= 4946, max= 5116, avg=5031.00, stdev=120.21, samples=2 00:11:11.947 lat (usec) : 1000=0.01% 00:11:11.947 lat (msec) : 2=0.02%, 4=0.79%, 10=13.28%, 20=78.63%, 50=7.27% 00:11:11.947 cpu : usr=3.36%, sys=5.44%, ctx=606, majf=0, minf=1 00:11:11.947 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:11.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.947 issued rwts: total=4646,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.947 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.947 00:11:11.947 Run status group 0 (all jobs): 00:11:11.947 READ: bw=61.9MiB/s (64.9MB/s), 11.8MiB/s-18.8MiB/s (12.3MB/s-19.7MB/s), io=63.1MiB (66.2MB), run=1009-1020msec 00:11:11.947 WRITE: bw=67.1MiB/s (70.3MB/s), 12.8MiB/s-19.8MiB/s (13.5MB/s-20.8MB/s), io=68.4MiB (71.7MB), run=1009-1020msec 00:11:11.947 00:11:11.947 Disk stats (read/write): 00:11:11.947 nvme0n1: ios=2586/2887, merge=0/0, ticks=46271/51860, in_queue=98131, util=97.49% 00:11:11.947 nvme0n2: ios=3930/4096, merge=0/0, ticks=43658/55189, in_queue=98847, util=98.07% 00:11:11.947 nvme0n3: ios=3072/3487, merge=0/0, ticks=45492/52923, in_queue=98415, util=88.30% 00:11:11.947 nvme0n4: ios=4126/4215, merge=0/0, 
ticks=54233/48826, in_queue=103059, util=97.68% 00:11:11.947 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:11.947 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3798213 00:11:11.947 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:11.947 10:13:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:11.947 [global] 00:11:11.947 thread=1 00:11:11.947 invalidate=1 00:11:11.947 rw=read 00:11:11.947 time_based=1 00:11:11.947 runtime=10 00:11:11.947 ioengine=libaio 00:11:11.947 direct=1 00:11:11.947 bs=4096 00:11:11.947 iodepth=1 00:11:11.947 norandommap=1 00:11:11.947 numjobs=1 00:11:11.947 00:11:11.947 [job0] 00:11:11.947 filename=/dev/nvme0n1 00:11:11.947 [job1] 00:11:11.947 filename=/dev/nvme0n2 00:11:11.947 [job2] 00:11:11.947 filename=/dev/nvme0n3 00:11:11.947 [job3] 00:11:11.947 filename=/dev/nvme0n4 00:11:11.947 Could not set queue depth (nvme0n1) 00:11:11.947 Could not set queue depth (nvme0n2) 00:11:11.947 Could not set queue depth (nvme0n3) 00:11:11.947 Could not set queue depth (nvme0n4) 00:11:12.204 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.204 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.204 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.204 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.204 fio-3.35 00:11:12.204 Starting 4 threads 00:11:14.727 10:13:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:14.986 10:13:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:14.986 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=22634496, buflen=4096 00:11:14.986 fio: pid=3798526, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:15.243 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=344064, buflen=4096 00:11:15.243 fio: pid=3798525, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:15.243 10:13:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.243 10:13:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:15.243 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=43786240, buflen=4096 00:11:15.243 fio: pid=3798523, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:15.501 10:13:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.501 10:13:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:15.758 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=19013632, 
buflen=4096 00:11:15.758 fio: pid=3798524, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:15.758 00:11:15.758 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3798523: Fri Dec 13 10:13:09 2024 00:11:15.758 read: IOPS=3425, BW=13.4MiB/s (14.0MB/s)(41.8MiB/3121msec) 00:11:15.758 slat (usec): min=3, max=27487, avg=11.43, stdev=280.96 00:11:15.758 clat (usec): min=178, max=4145, avg=276.59, stdev=56.11 00:11:15.758 lat (usec): min=195, max=27938, avg=288.02, stdev=288.65 00:11:15.758 clat percentiles (usec): 00:11:15.758 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 227], 20.00th=[ 255], 00:11:15.758 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 289], 00:11:15.758 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 310], 00:11:15.758 | 99.00th=[ 326], 99.50th=[ 371], 99.90th=[ 742], 99.95th=[ 1106], 00:11:15.758 | 99.99th=[ 1762] 00:11:15.758 bw ( KiB/s): min=13104, max=14695, per=56.02%, avg=13758.50, stdev=727.99, samples=6 00:11:15.758 iops : min= 3276, max= 3673, avg=3439.50, stdev=181.80, samples=6 00:11:15.758 lat (usec) : 250=18.31%, 500=81.49%, 750=0.09%, 1000=0.03% 00:11:15.758 lat (msec) : 2=0.06%, 10=0.01% 00:11:15.758 cpu : usr=1.79%, sys=5.54%, ctx=10693, majf=0, minf=1 00:11:15.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.758 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.758 issued rwts: total=10691,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.758 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.758 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3798524: Fri Dec 13 10:13:09 2024 00:11:15.758 read: IOPS=1361, BW=5444KiB/s (5574kB/s)(18.1MiB/3411msec) 00:11:15.758 slat (usec): min=5, max=13811, avg=13.32, stdev=245.32 00:11:15.758 clat (usec): min=187, max=42107, avg=715.39, stdev=4398.40 00:11:15.759 lat (usec): min=194, max=55077, avg=728.71, stdev=4433.13 00:11:15.759 clat percentiles (usec): 00:11:15.759 | 1.00th=[ 200], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 223], 00:11:15.759 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 243], 00:11:15.759 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 273], 00:11:15.759 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:11:15.759 | 99.99th=[42206] 00:11:15.759 bw ( KiB/s): min= 96, max=16152, per=19.43%, avg=4771.17, stdev=5981.71, samples=6 00:11:15.759 iops : min= 24, max= 4038, avg=1192.67, stdev=1495.48, samples=6 00:11:15.759 lat (usec) : 250=74.15%, 500=24.60%, 750=0.06% 00:11:15.759 lat (msec) : 50=1.16% 00:11:15.759 cpu : usr=0.38%, sys=1.26%, ctx=4647, majf=0, minf=2 00:11:15.759 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.759 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.759 issued rwts: total=4643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.759 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.759 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3798525: Fri Dec 13 10:13:09 2024 00:11:15.759 read: IOPS=29, BW=115KiB/s (118kB/s)(336KiB/2927msec) 00:11:15.759 slat (usec): min=7, max=12633, avg=169.19, 
stdev=1368.04 00:11:15.759 clat (usec): min=259, max=42088, avg=34417.64, stdev=15333.90 00:11:15.759 lat (usec): min=282, max=53945, avg=34588.57, stdev=15463.07 00:11:15.759 clat percentiles (usec): 00:11:15.759 | 1.00th=[ 260], 5.00th=[ 289], 10.00th=[ 306], 20.00th=[40633], 00:11:15.759 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:15.759 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:15.759 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:15.759 | 99.99th=[42206] 00:11:15.759 bw ( KiB/s): min= 96, max= 144, per=0.48%, avg=118.40, stdev=19.10, samples=5 00:11:15.759 iops : min= 24, max= 36, avg=29.60, stdev= 4.77, samples=5 00:11:15.759 lat (usec) : 500=14.12%, 750=2.35% 00:11:15.759 lat (msec) : 50=82.35% 00:11:15.759 cpu : usr=0.10%, sys=0.00%, ctx=86, majf=0, minf=2 00:11:15.759 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.759 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.759 issued rwts: total=85,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.759 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.759 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3798526: Fri Dec 13 10:13:09 2024 00:11:15.759 read: IOPS=2023, BW=8091KiB/s (8285kB/s)(21.6MiB/2732msec) 00:11:15.759 slat (nsec): min=6158, max=33443, avg=7268.46, stdev=1530.24 00:11:15.759 clat (usec): min=219, max=42007, avg=481.96, stdev=2902.41 00:11:15.759 lat (usec): min=227, max=42030, avg=489.23, stdev=2903.50 00:11:15.759 clat percentiles (usec): 00:11:15.759 | 1.00th=[ 237], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 260], 00:11:15.759 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:11:15.759 | 70.00th=[ 285], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 306], 00:11:15.759 | 99.00th=[ 318], 99.50th=[40633], 99.90th=[41157], 99.95th=[42206], 00:11:15.759 | 99.99th=[42206] 00:11:15.759 bw ( KiB/s): min= 96, max=14504, per=35.96%, avg=8832.00, stdev=7277.21, samples=5 00:11:15.759 iops : min= 24, max= 3626, avg=2208.00, stdev=1819.30, samples=5 00:11:15.759 lat (usec) : 250=6.97%, 500=92.51% 00:11:15.759 lat (msec) : 50=0.51% 00:11:15.759 cpu : usr=0.37%, sys=2.05%, ctx=5527, majf=0, minf=2 00:11:15.759 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.759 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.759 issued rwts: total=5527,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.759 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.759 00:11:15.759 Run status group 0 (all jobs): 00:11:15.759 READ: bw=24.0MiB/s (25.1MB/s), 115KiB/s-13.4MiB/s (118kB/s-14.0MB/s), io=81.8MiB (85.8MB), run=2732-3411msec 00:11:15.759 00:11:15.759 Disk stats (read/write): 00:11:15.759 nvme0n1: ios=10690/0, merge=0/0, ticks=2841/0, in_queue=2841, util=94.61% 00:11:15.759 nvme0n2: ios=4641/0, merge=0/0, ticks=3252/0, in_queue=3252, util=95.66% 00:11:15.759 nvme0n3: ios=82/0, merge=0/0, ticks=2811/0, in_queue=2811, util=96.15% 00:11:15.759 nvme0n4: ios=5523/0, merge=0/0, ticks=2509/0, in_queue=2509, util=96.45% 00:11:15.759 10:13:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:11:15.759 10:13:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:16.017 10:13:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:16.017 10:13:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:16.274 10:13:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:16.274 10:13:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:16.531 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:16.531 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:16.811 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:16.811 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:17.146 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:17.146 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3798213 00:11:17.146 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:17.146 10:13:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:18.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.085 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:18.086 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:18.086 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:18.086 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.086 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:18.086 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.086 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:18.086 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:18.086 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:18.086 nvmf hotplug test: fio failed as expected 00:11:18.086 10:13:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 
-- # rm -f ./local-job0-0-verify.state 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:18.343 rmmod nvme_tcp 00:11:18.343 rmmod nvme_fabrics 00:11:18.343 rmmod nvme_keyring 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3795306 ']' 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3795306 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3795306 ']' 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3795306 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3795306 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3795306' 00:11:18.343 killing process with pid 3795306 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3795306 00:11:18.343 10:13:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3795306 00:11:19.715 10:13:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:19.715 10:13:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:19.715 10:13:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:19.715 10:13:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:19.715 10:13:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-save 00:11:19.715 10:13:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:19.715 10:13:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:19.715 10:13:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:19.715 10:13:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:19.715 10:13:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.715 10:13:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.715 10:13:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.617 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:21.617 00:11:21.617 real 0m29.381s 00:11:21.617 user 1m59.508s 00:11:21.617 sys 0m8.016s 00:11:21.617 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.617 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.617 ************************************ 00:11:21.617 END TEST nvmf_fio_target 00:11:21.617 ************************************ 00:11:21.617 10:13:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:21.617 10:13:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:21.617 10:13:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.617 10:13:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:21.617 ************************************ 00:11:21.617 START TEST nvmf_bdevio 00:11:21.617 ************************************ 00:11:21.617 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:21.876 * Looking for test storage... 
00:11:21.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:21.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.877 --rc genhtml_branch_coverage=1 00:11:21.877 --rc genhtml_function_coverage=1 00:11:21.877 --rc genhtml_legend=1 00:11:21.877 --rc geninfo_all_blocks=1 00:11:21.877 --rc geninfo_unexecuted_blocks=1 00:11:21.877 00:11:21.877 ' 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:21.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.877 --rc genhtml_branch_coverage=1 00:11:21.877 --rc genhtml_function_coverage=1 00:11:21.877 --rc genhtml_legend=1 00:11:21.877 --rc geninfo_all_blocks=1 00:11:21.877 --rc geninfo_unexecuted_blocks=1 00:11:21.877 00:11:21.877 ' 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:21.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.877 --rc genhtml_branch_coverage=1 00:11:21.877 --rc genhtml_function_coverage=1 00:11:21.877 --rc genhtml_legend=1 00:11:21.877 --rc geninfo_all_blocks=1 00:11:21.877 --rc geninfo_unexecuted_blocks=1 00:11:21.877 00:11:21.877 ' 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:21.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.877 --rc genhtml_branch_coverage=1 00:11:21.877 --rc genhtml_function_coverage=1 00:11:21.877 --rc genhtml_legend=1 00:11:21.877 --rc geninfo_all_blocks=1 00:11:21.877 --rc geninfo_unexecuted_blocks=1 00:11:21.877 00:11:21.877 ' 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:21.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:21.877 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:21.878 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:21.878 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.878 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.878 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.878 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:21.878 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:21.878 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:21.878 10:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:27.149 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:27.149 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:27.149 10:13:20 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:27.149 Found net devices under 0000:af:00.0: cvl_0_0 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:27.149 Found net devices under 0000:af:00.1: cvl_0_1 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:27.149 
10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:27.149 10:13:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:27.149 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27.149 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27.149 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:27.149 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27.407 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27.407 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27.407 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:27.407 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:27.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:27.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:11:27.407 00:11:27.407 --- 10.0.0.2 ping statistics --- 00:11:27.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.407 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:11:27.407 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:27.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:27.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:11:27.407 00:11:27.407 --- 10.0.0.1 ping statistics --- 00:11:27.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.407 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:11:27.407 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.407 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:27.407 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:27.407 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.408 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:27.408 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:27.408 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.408 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:27.408 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:27.408 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:27.408 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:27.408 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:27.408 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.408 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3802933 00:11:27.408 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:27.408 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3802933 00:11:27.408 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3802933 ']' 00:11:27.408 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.408 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.408 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.408 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.408 10:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.408 [2024-12-13 10:13:21.225565] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
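The nvmf_tcp_init trace above is the fixture the rest of this suite runs against: the E810 port cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1/24, cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace as the target at 10.0.0.2/24, an iptables rule opens TCP port 4420 inbound, and connectivity is ping-verified in both directions. A condensed standalone sketch of the same steps, using only the interface names and addresses visible in the log (the "-m comment SPDK_NVMF:..." tag the harness appends to the iptables rule is omitted):

  ip netns add cvl_0_0_ns_spdk                                        # target-side network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic reach the listener
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator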
00:11:27.408 [2024-12-13 10:13:21.225653] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.666 [2024-12-13 10:13:21.345802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:27.666 [2024-12-13 10:13:21.450825] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.666 [2024-12-13 10:13:21.450888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.666 [2024-12-13 10:13:21.450900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.666 [2024-12-13 10:13:21.450912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.666 [2024-12-13 10:13:21.450920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:27.666 [2024-12-13 10:13:21.453408] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:11:27.666 [2024-12-13 10:13:21.453544] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:11:27.666 [2024-12-13 10:13:21.453614] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.666 [2024-12-13 10:13:21.453636] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:11:28.231 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.231 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:28.231 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:28.231 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:28.231 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.231 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.231 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.231 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.231 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.231 [2024-12-13 10:13:22.070368] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.231 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.231 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:28.231 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.231 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.489 Malloc0 00:11:28.489 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.489 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:28.489 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.489 10:13:22 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.489 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.489 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:28.489 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.489 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.489 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.489 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.489 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.489 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:28.489 [2024-12-13 10:13:22.202029] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.489 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.489 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:28.489 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:28.489 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:28.489 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:28.489 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:28.489 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:28.489 { 00:11:28.489 "params": { 00:11:28.489 "name": "Nvme$subsystem", 00:11:28.489 "trtype": "$TEST_TRANSPORT", 00:11:28.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:28.489 "adrfam": "ipv4", 00:11:28.489 "trsvcid": "$NVMF_PORT", 00:11:28.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:28.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:28.489 "hdgst": ${hdgst:-false}, 00:11:28.489 "ddgst": ${ddgst:-false} 00:11:28.489 }, 00:11:28.489 "method": "bdev_nvme_attach_controller" 00:11:28.489 } 00:11:28.489 EOF 00:11:28.489 )") 00:11:28.489 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:28.489 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:28.489 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:28.489 10:13:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:28.489 "params": { 00:11:28.489 "name": "Nvme1", 00:11:28.489 "trtype": "tcp", 00:11:28.489 "traddr": "10.0.0.2", 00:11:28.489 "adrfam": "ipv4", 00:11:28.489 "trsvcid": "4420", 00:11:28.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:28.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:28.489 "hdgst": false, 00:11:28.489 "ddgst": false 00:11:28.489 }, 00:11:28.489 "method": "bdev_nvme_attach_controller" 00:11:28.489 }' 00:11:28.489 [2024-12-13 10:13:22.266467] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
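The RPC calls traced above configure the target end to end: a TCP transport with "-o -u 8192" options, a 64 MiB / 512-byte-block Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 added as its namespace, and a listener on 10.0.0.2:4420; gen_nvmf_target_json then prints the bdev_nvme_attach_controller JSON that the bdevio app (starting in the lines below) reads via --json /dev/fd/62. rpc_cmd here is the harness wrapper around SPDK's scripts/rpc.py, so an equivalent manual setup would look roughly like this sketch (the relative paths to an SPDK checkout are an assumption, not taken from the log):

  # the harness runs the target inside the namespace built earlier
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &   # 0x78 = binary 1111000, i.e. cores 3-6, matching the four reactors reported
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420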
00:11:28.489 [2024-12-13 10:13:22.266553] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3803176 ] 00:11:28.489 [2024-12-13 10:13:22.374409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:28.747 [2024-12-13 10:13:22.496633] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.747 [2024-12-13 10:13:22.496702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.747 [2024-12-13 10:13:22.496707] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.312 I/O targets: 00:11:29.312 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:29.312 00:11:29.312 00:11:29.312 CUnit - A unit testing framework for C - Version 2.1-3 00:11:29.312 http://cunit.sourceforge.net/ 00:11:29.312 00:11:29.312 00:11:29.312 Suite: bdevio tests on: Nvme1n1 00:11:29.312 Test: blockdev write read block ...passed 00:11:29.312 Test: blockdev write zeroes read block ...passed 00:11:29.312 Test: blockdev write zeroes read no split ...passed 00:11:29.312 Test: blockdev write zeroes read split ...passed 00:11:29.312 Test: blockdev write zeroes read split partial ...passed 00:11:29.312 Test: blockdev reset ...[2024-12-13 10:13:23.173380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:29.312 [2024-12-13 10:13:23.173494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor 00:11:29.570 [2024-12-13 10:13:23.322244] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:29.570 passed 00:11:29.570 Test: blockdev write read 8 blocks ...passed 00:11:29.570 Test: blockdev write read size > 128k ...passed 00:11:29.570 Test: blockdev write read invalid size ...passed 00:11:29.570 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:29.570 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:29.570 Test: blockdev write read max offset ...passed 00:11:29.570 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:29.570 Test: blockdev writev readv 8 blocks ...passed 00:11:29.570 Test: blockdev writev readv 30 x 1block ...passed 00:11:29.828 Test: blockdev writev readv block ...passed 00:11:29.828 Test: blockdev writev readv size > 128k ...passed 00:11:29.828 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:29.828 Test: blockdev comparev and writev ...[2024-12-13 10:13:23.499306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.828 [2024-12-13 10:13:23.499348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:29.828 [2024-12-13 10:13:23.499368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.828 [2024-12-13 10:13:23.499380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:29.828 [2024-12-13 10:13:23.499694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.828 [2024-12-13 10:13:23.499711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:29.828 [2024-12-13 10:13:23.499728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.828 [2024-12-13 10:13:23.499737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:29.828 [2024-12-13 10:13:23.500030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.828 [2024-12-13 10:13:23.500047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:29.828 [2024-12-13 10:13:23.500063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.828 [2024-12-13 10:13:23.500073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:29.828 [2024-12-13 10:13:23.500355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.828 [2024-12-13 10:13:23.500371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:29.828 [2024-12-13 10:13:23.500390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:29.828 [2024-12-13 10:13:23.500400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:29.828 passed 00:11:29.828 Test: blockdev nvme passthru rw ...passed 00:11:29.828 Test: blockdev nvme passthru vendor specific ...[2024-12-13 10:13:23.582885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:29.828 [2024-12-13 10:13:23.582916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:29.828 [2024-12-13 10:13:23.583055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:29.828 [2024-12-13 10:13:23.583069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:29.828 [2024-12-13 10:13:23.583200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:29.828 [2024-12-13 10:13:23.583214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:29.828 [2024-12-13 10:13:23.583337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:29.828 [2024-12-13 10:13:23.583351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:29.828 passed 00:11:29.828 Test: blockdev nvme admin passthru ...passed 00:11:29.828 Test: blockdev copy ...passed 00:11:29.828 00:11:29.828 Run Summary: Type Total Ran Passed Failed Inactive 00:11:29.828 suites 1 1 n/a 0 0 00:11:29.828 tests 23 23 23 0 0 00:11:29.828 asserts 152 152 152 0 n/a 00:11:29.828 00:11:29.828 Elapsed time = 1.430 seconds 00:11:30.762 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:30.762 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.762 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.762 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.762 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:30.762 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:30.762 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:30.762 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:30.762 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:30.762 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:30.762 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:30.762 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:30.762 rmmod nvme_tcp 00:11:30.762 rmmod nvme_fabrics 00:11:30.762 rmmod nvme_keyring 00:11:30.762 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:30.762 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:30.762 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
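With the bdevio suite passed (23/23 tests) and the subsystem deleted, nvmftestfini starts unwinding the fixture here; the remaining commands appear in the lines that follow. Condensed, the teardown mirrors the setup. This is a sketch, and treating _remove_spdk_ns as deleting the cvl_0_0_ns_spdk namespace created for this run is an assumption:

  sync
  modprobe -v -r nvme-tcp                                  # unload the kernel initiator modules
  modprobe -v -r nvme-fabrics                              # (nvme_keyring goes with them, per the rmmod output)
  kill "$nvmfpid" && wait "$nvmfpid"                       # stop the nvmf_tgt started earlier (pid 3802933 here)
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the rules tagged SPDK_NVMF
  ip netns delete cvl_0_0_ns_spdk                          # what _remove_spdk_ns amounts to for this run
  ip -4 addr flush cvl_0_1                                 # clear the initiator address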
00:11:30.762 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3802933 ']' 00:11:30.762 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3802933 00:11:30.762 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3802933 ']' 00:11:30.762 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3802933 00:11:30.762 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:30.762 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:30.762 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3802933 00:11:31.019 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:31.019 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:31.019 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3802933' 00:11:31.019 killing process with pid 3802933 00:11:31.019 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3802933 00:11:31.019 10:13:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3802933 00:11:32.392 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:32.392 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:32.392 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:32.392 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:32.392 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:32.392 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:32.392 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:32.392 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:32.392 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:32.392 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.392 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.392 10:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.295 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:34.295 00:11:34.295 real 0m12.601s 00:11:34.295 user 0m23.540s 00:11:34.295 sys 0m4.838s 00:11:34.295 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.295 10:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:34.295 ************************************ 00:11:34.295 END TEST nvmf_bdevio 00:11:34.295 ************************************ 00:11:34.295 10:13:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:34.295 00:11:34.295 real 5m2.297s 00:11:34.295 user 11m59.751s 00:11:34.295 sys 1m36.875s 
00:11:34.295 10:13:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.295 10:13:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:34.295 ************************************ 00:11:34.295 END TEST nvmf_target_core 00:11:34.295 ************************************ 00:11:34.295 10:13:28 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:34.295 10:13:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.295 10:13:28 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.295 10:13:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:34.554 ************************************ 00:11:34.554 START TEST nvmf_target_extra 00:11:34.554 ************************************ 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:34.554 * Looking for test storage... 00:11:34.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:34.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.554 --rc genhtml_branch_coverage=1 00:11:34.554 --rc genhtml_function_coverage=1 00:11:34.554 --rc genhtml_legend=1 00:11:34.554 --rc geninfo_all_blocks=1 00:11:34.554 --rc geninfo_unexecuted_blocks=1 00:11:34.554 00:11:34.554 ' 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:34.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.554 --rc genhtml_branch_coverage=1 00:11:34.554 --rc genhtml_function_coverage=1 00:11:34.554 --rc genhtml_legend=1 00:11:34.554 --rc geninfo_all_blocks=1 00:11:34.554 --rc geninfo_unexecuted_blocks=1 00:11:34.554 00:11:34.554 ' 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:34.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.554 --rc genhtml_branch_coverage=1 00:11:34.554 --rc genhtml_function_coverage=1 00:11:34.554 --rc genhtml_legend=1 00:11:34.554 --rc geninfo_all_blocks=1 00:11:34.554 --rc geninfo_unexecuted_blocks=1 00:11:34.554 00:11:34.554 ' 00:11:34.554 10:13:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:34.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.554 --rc genhtml_branch_coverage=1 00:11:34.554 --rc genhtml_function_coverage=1 00:11:34.554 --rc genhtml_legend=1 00:11:34.555 --rc geninfo_all_blocks=1 00:11:34.555 --rc geninfo_unexecuted_blocks=1 00:11:34.555 00:11:34.555 ' 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
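The "[: : integer expression expected" message that recurs in these traces (common.sh line 33, also seen when bdevio sourced the same file earlier) comes from a numeric test whose left-hand side expands to an empty string: '[' '' -eq 1 ']' is not a valid integer comparison, so bash prints the warning, the test returns non-zero, and the script falls through to the next branch, which is why the run continues. The variable being tested is not visible in the xtrace, so the usual guard is sketched here with a hypothetical name:

  # VAR is a placeholder -- the real variable name at common.sh:33 does not appear in the trace
  if [ "${VAR:-0}" -eq 1 ]; then    # defaulting the empty/unset value keeps the numeric test well-formed
    echo "flag enabled"
  fi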
00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:34.555 ************************************ 00:11:34.555 START TEST nvmf_example 00:11:34.555 ************************************ 00:11:34.555 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:34.814 * Looking for test storage... 
00:11:34.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.814 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:34.814 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:11:34.814 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:34.814 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:34.814 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:34.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.815 --rc genhtml_branch_coverage=1 00:11:34.815 --rc genhtml_function_coverage=1 00:11:34.815 --rc genhtml_legend=1 00:11:34.815 --rc geninfo_all_blocks=1 00:11:34.815 --rc geninfo_unexecuted_blocks=1 00:11:34.815 00:11:34.815 ' 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:34.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.815 --rc genhtml_branch_coverage=1 00:11:34.815 --rc genhtml_function_coverage=1 00:11:34.815 --rc genhtml_legend=1 00:11:34.815 --rc geninfo_all_blocks=1 00:11:34.815 --rc geninfo_unexecuted_blocks=1 00:11:34.815 00:11:34.815 ' 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:34.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.815 --rc genhtml_branch_coverage=1 00:11:34.815 --rc genhtml_function_coverage=1 00:11:34.815 --rc genhtml_legend=1 00:11:34.815 --rc geninfo_all_blocks=1 00:11:34.815 --rc geninfo_unexecuted_blocks=1 00:11:34.815 00:11:34.815 ' 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:34.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.815 --rc genhtml_branch_coverage=1 00:11:34.815 --rc genhtml_function_coverage=1 00:11:34.815 --rc genhtml_legend=1 00:11:34.815 --rc geninfo_all_blocks=1 00:11:34.815 --rc geninfo_unexecuted_blocks=1 00:11:34.815 00:11:34.815 ' 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:34.815 10:13:28 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:34.815 10:13:28 
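The "integer expression expected" message above comes from build_nvmf_app_args evaluating a numeric test on an empty value ('[' '' -eq 1 ']' at nvmf/common.sh line 33); the branch is skipped so the run continues, but the noise can be avoided by defaulting the variable before testing it. A minimal sketch of the guarded form (the variable name is hypothetical, not taken from common.sh):

# Default the value first so test(1) always sees an integer.
some_flag="${SOME_OPTIONAL_FLAG:-0}"   # hypothetical variable, for illustration only
if [ "$some_flag" -eq 1 ]; then
    echo "optional feature enabled"
fi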
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.815 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:34.816 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:34.816 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.816 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:34.816 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:34.816 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:34.816 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.816 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.816 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.816 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:34.816 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:34.816 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:34.816 10:13:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:40.084 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:40.084 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:40.084 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:40.084 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:40.084 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:40.084 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:40.084 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:40.084 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:40.084 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:40.084 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:40.084 10:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:40.084 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:40.084 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:40.084 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:40.084 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:40.084 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:40.084 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:40.084 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:40.085 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:40.085 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:40.085 Found net devices under 0000:af:00.0: cvl_0_0 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:40.085 Found net devices under 0000:af:00.1: cvl_0_1 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.085 10:13:33 
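gather_supported_nvmf_pci_devs above matches the two E810 ports (device ID 0x159b) and then resolves each PCI address to its kernel netdev by globbing sysfs, which is where the "Found net devices under 0000:af:00.x: cvl_0_x" lines come from. A stand-alone sketch of that lookup, using a PCI address from this run:

# Resolve a PCI function to its network interface name(s), the same
# sysfs glob nvmf/common.sh uses for pci_net_devs.
pci=0000:af:00.0                            # address taken from the trace above
pci_net_devs=(/sys/bus/pci/devices/"$pci"/net/*)
pci_net_devs=("${pci_net_devs[@]##*/}")     # keep only the interface names
printf 'Found net devices under %s: %s\n' "$pci" "${pci_net_devs[*]}"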
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:40.085 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:40.344 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:40.344 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:40.344 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:40.344 10:13:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:40.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:40.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:11:40.344 00:11:40.344 --- 10.0.0.2 ping statistics --- 00:11:40.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.344 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:40.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:40.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:11:40.344 00:11:40.344 --- 10.0.0.1 ping statistics --- 00:11:40.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.344 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3807371 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3807371 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3807371 ']' 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:40.344 10:13:34 
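Everything from "ip netns add" through the two pings is nvmf_tcp_init moving the target port (cvl_0_0) into its own network namespace so the initiator side (cvl_0_1) can reach it over 10.0.0.0/24, after which nvmfexamplestart launches the example target inside that namespace. Condensed from the trace above (interface names, addresses and arguments are the ones used in this run; the binary path is shortened to a relative one):

# Target interface lives in a private namespace; initiator stays in the root namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP traffic through and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# Start the example target in the namespace; the harness then waits for its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &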
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.344 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.277 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.277 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:41.277 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:41.277 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:41.277 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.277 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:41.277 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.277 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.277 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.277 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:41.277 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.277 10:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.277 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.277 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:41.277 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:41.277 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.277 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.277 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.277 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:41.277 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:41.277 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.277 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.277 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.277 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:41.277 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:41.277 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.277 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.277 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:41.277 10:13:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:53.467 Initializing NVMe Controllers 00:11:53.467 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:53.467 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:53.467 Initialization complete. Launching workers. 00:11:53.467 ======================================================== 00:11:53.467 Latency(us) 00:11:53.467 Device Information : IOPS MiB/s Average min max 00:11:53.467 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16177.02 63.19 3956.15 828.11 15322.12 00:11:53.467 ======================================================== 00:11:53.467 Total : 16177.02 63.19 3956.15 828.11 15322.12 00:11:53.467 00:11:53.467 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:53.467 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:53.467 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:53.467 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:53.467 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:53.467 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:53.467 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:53.467 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:53.467 rmmod nvme_tcp 00:11:53.467 rmmod nvme_fabrics 00:11:53.467 rmmod nvme_keyring 00:11:53.467 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:53.467 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:53.467 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:53.467 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3807371 ']' 00:11:53.467 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3807371 00:11:53.467 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3807371 ']' 00:11:53.467 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3807371 00:11:53.467 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:53.467 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.467 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3807371 00:11:53.467 10:13:45 
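The rpc_cmd calls above provision the running example target over /var/tmp/spdk.sock, and the measurement itself is a single spdk_nvme_perf run against the new listener. rpc_cmd is the autotest wrapper around scripts/rpc.py, so an equivalent stand-alone sequence (assuming rpc.py and spdk_nvme_perf are on PATH) would look roughly like this:

# Provision the target: TCP transport, one 64 MiB / 512 B-block malloc namespace,
# one subsystem listening on 10.0.0.2:4420 (values taken from the trace above).
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# 10-second 4 KiB mixed random read/write run at queue depth 64, same arguments as the log.
spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'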
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:53.467 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:53.467 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3807371' 00:11:53.467 killing process with pid 3807371 00:11:53.467 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3807371 00:11:53.467 10:13:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3807371 00:11:53.467 nvmf threads initialize successfully 00:11:53.467 bdev subsystem init successfully 00:11:53.467 created a nvmf target service 00:11:53.467 create targets's poll groups done 00:11:53.467 all subsystems of target started 00:11:53.467 nvmf target is running 00:11:53.467 all subsystems of target stopped 00:11:53.467 destroy targets's poll groups done 00:11:53.467 destroyed the nvmf target service 00:11:53.467 bdev subsystem finish successfully 00:11:53.467 nvmf threads destroy successfully 00:11:53.467 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:53.467 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:53.467 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:53.467 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:53.467 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:53.467 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:53.467 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:53.467 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:53.467 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:53.467 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.467 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.467 10:13:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.373 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:55.373 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:55.373 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:55.373 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:55.373 00:11:55.373 real 0m20.531s 00:11:55.373 user 0m49.869s 00:11:55.373 sys 0m5.819s 00:11:55.373 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.373 10:13:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:55.373 ************************************ 00:11:55.373 END TEST nvmf_example 00:11:55.373 ************************************ 00:11:55.373 10:13:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:55.373 10:13:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:55.373 10:13:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.373 10:13:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:55.373 ************************************ 00:11:55.373 START TEST nvmf_filesystem 00:11:55.373 ************************************ 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:55.373 * Looking for test storage... 00:11:55.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:55.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.373 --rc genhtml_branch_coverage=1 00:11:55.373 --rc genhtml_function_coverage=1 00:11:55.373 --rc genhtml_legend=1 00:11:55.373 --rc geninfo_all_blocks=1 00:11:55.373 --rc geninfo_unexecuted_blocks=1 00:11:55.373 00:11:55.373 ' 00:11:55.373 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:55.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.374 --rc genhtml_branch_coverage=1 00:11:55.374 --rc genhtml_function_coverage=1 00:11:55.374 --rc genhtml_legend=1 00:11:55.374 --rc geninfo_all_blocks=1 00:11:55.374 --rc geninfo_unexecuted_blocks=1 00:11:55.374 00:11:55.374 ' 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:55.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.374 --rc genhtml_branch_coverage=1 00:11:55.374 --rc genhtml_function_coverage=1 00:11:55.374 --rc genhtml_legend=1 00:11:55.374 --rc geninfo_all_blocks=1 00:11:55.374 --rc geninfo_unexecuted_blocks=1 00:11:55.374 00:11:55.374 ' 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:55.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.374 --rc genhtml_branch_coverage=1 00:11:55.374 --rc genhtml_function_coverage=1 00:11:55.374 --rc genhtml_legend=1 00:11:55.374 --rc geninfo_all_blocks=1 00:11:55.374 --rc geninfo_unexecuted_blocks=1 00:11:55.374 00:11:55.374 ' 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:55.374 10:13:49 
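The scripts/common.sh block traced here (and again at the start of every test that sources autotest_common.sh) is just "lt 1.15 2": it compares the installed lcov version against 2 field by field and, when the old lcov is found, keeps the legacy --rc lcov_branch_coverage/lcov_function_coverage options. A self-contained sketch of the same comparison, in the spirit of cmp_versions rather than a copy of it:

# Field-by-field dotted-version compare; returns 0 when $1 < $2.
version_lt() {
    local IFS=.
    local -a ver1=($1) ver2=($2)
    local i
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
    done
    return 1   # versions are equal
}

if version_lt "1.15" "2"; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi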
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:55.374 
10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:55.374 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:55.375 #define SPDK_CONFIG_H 00:11:55.375 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:55.375 #define SPDK_CONFIG_APPS 1 00:11:55.375 #define SPDK_CONFIG_ARCH native 00:11:55.375 #define SPDK_CONFIG_ASAN 1 00:11:55.375 #undef SPDK_CONFIG_AVAHI 00:11:55.375 #undef SPDK_CONFIG_CET 00:11:55.375 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:55.375 #define SPDK_CONFIG_COVERAGE 1 00:11:55.375 #define SPDK_CONFIG_CROSS_PREFIX 00:11:55.375 #undef SPDK_CONFIG_CRYPTO 00:11:55.375 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:55.375 #undef SPDK_CONFIG_CUSTOMOCF 00:11:55.375 #undef SPDK_CONFIG_DAOS 00:11:55.375 #define SPDK_CONFIG_DAOS_DIR 00:11:55.375 #define SPDK_CONFIG_DEBUG 1 00:11:55.375 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:55.375 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:55.375 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:55.375 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:55.375 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:55.375 #undef SPDK_CONFIG_DPDK_UADK 00:11:55.375 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:55.375 #define SPDK_CONFIG_EXAMPLES 1 00:11:55.375 #undef SPDK_CONFIG_FC 00:11:55.375 #define SPDK_CONFIG_FC_PATH 00:11:55.375 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:55.375 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:55.375 #define SPDK_CONFIG_FSDEV 1 00:11:55.375 #undef SPDK_CONFIG_FUSE 00:11:55.375 #undef SPDK_CONFIG_FUZZER 00:11:55.375 #define SPDK_CONFIG_FUZZER_LIB 00:11:55.375 #undef SPDK_CONFIG_GOLANG 00:11:55.375 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:55.375 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:55.375 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:55.375 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:55.375 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:55.375 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:55.375 #undef SPDK_CONFIG_HAVE_LZ4 00:11:55.375 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:55.375 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:55.375 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:55.375 #define SPDK_CONFIG_IDXD 1 00:11:55.375 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:55.375 #undef SPDK_CONFIG_IPSEC_MB 00:11:55.375 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:55.375 #define SPDK_CONFIG_ISAL 1 00:11:55.375 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:55.375 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:55.375 #define SPDK_CONFIG_LIBDIR 00:11:55.375 #undef SPDK_CONFIG_LTO 00:11:55.375 #define SPDK_CONFIG_MAX_LCORES 128 00:11:55.375 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:55.375 #define SPDK_CONFIG_NVME_CUSE 1 00:11:55.375 #undef SPDK_CONFIG_OCF 00:11:55.375 #define SPDK_CONFIG_OCF_PATH 00:11:55.375 #define SPDK_CONFIG_OPENSSL_PATH 00:11:55.375 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:55.375 #define SPDK_CONFIG_PGO_DIR 00:11:55.375 #undef SPDK_CONFIG_PGO_USE 00:11:55.375 #define SPDK_CONFIG_PREFIX /usr/local 00:11:55.375 #undef SPDK_CONFIG_RAID5F 00:11:55.375 #undef SPDK_CONFIG_RBD 00:11:55.375 #define SPDK_CONFIG_RDMA 1 00:11:55.375 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:55.375 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:55.375 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:55.375 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:55.375 #define SPDK_CONFIG_SHARED 1 00:11:55.375 #undef SPDK_CONFIG_SMA 00:11:55.375 #define SPDK_CONFIG_TESTS 1 00:11:55.375 #undef SPDK_CONFIG_TSAN 
00:11:55.375 #define SPDK_CONFIG_UBLK 1 00:11:55.375 #define SPDK_CONFIG_UBSAN 1 00:11:55.375 #undef SPDK_CONFIG_UNIT_TESTS 00:11:55.375 #undef SPDK_CONFIG_URING 00:11:55.375 #define SPDK_CONFIG_URING_PATH 00:11:55.375 #undef SPDK_CONFIG_URING_ZNS 00:11:55.375 #undef SPDK_CONFIG_USDT 00:11:55.375 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:55.375 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:55.375 #undef SPDK_CONFIG_VFIO_USER 00:11:55.375 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:55.375 #define SPDK_CONFIG_VHOST 1 00:11:55.375 #define SPDK_CONFIG_VIRTIO 1 00:11:55.375 #undef SPDK_CONFIG_VTUNE 00:11:55.375 #define SPDK_CONFIG_VTUNE_DIR 00:11:55.375 #define SPDK_CONFIG_WERROR 1 00:11:55.375 #define SPDK_CONFIG_WPDK_DIR 00:11:55.375 #undef SPDK_CONFIG_XNVME 00:11:55.375 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:55.375 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:55.376 10:13:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:55.376 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
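The paired ": 0" / "export SPDK_TEST_..." trace lines running through this stretch of autotest_common.sh (and continuing below) are consistent with the standard bash default-assignment idiom: each test flag keeps whatever value autorun-spdk.conf supplied and otherwise falls back to a default before being exported to the child test scripts. A minimal sketch of that pattern, assuming the usual idiom rather than quoting the actual script:

    # Sketch only: keep a value supplied by autorun-spdk.conf, otherwise default it,
    # then export so the sourced test scripts see the flag.
    : "${SPDK_TEST_NVMF:=0}"
    export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
    export SPDK_TEST_NVMF_TRANSPORT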
00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:55.637 10:13:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:55.637 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
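The ASAN_OPTIONS and UBSAN_OPTIONS strings exported here are colon-separated sanitizer runtime flags: abort_on_error=1 turns a detected error into an abort rather than a quiet exit, halt_on_error=1 makes UBSan stop at the first finding, print_stacktrace=1 adds a backtrace, and exitcode=134 matches the SIGABRT exit status. Any instrumented binary started from this environment inherits them; a hedged illustration follows (the launch line is hypothetical, not taken from the test scripts):

    # Illustration: sanitizer-instrumented SPDK binaries pick these up from the environment.
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    "$SPDK_BIN_DIR/nvmf_tgt" &   # hypothetical launch under the exported sanitizer options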
00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
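The trace just above rebuilds /var/tmp/asan_suppression_file, appends "leak:libfuse3.so", and points LSAN_OPTIONS at it, so a known leak attributed to libfuse3 does not fail the leak-sanitized run. A minimal sketch of that pattern, assuming the real script also concatenates entries not visible in this excerpt:

    # Sketch of the LeakSanitizer suppression setup seen in the trace.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" >> "$asan_suppression_file"   # suppress leaks reported inside libfuse3
    export LSAN_OPTIONS="suppressions=$asan_suppression_file"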
00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:55.638 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3809932 ]] 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3809932 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
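set_test_storage 2147483648 is the step that verifies the test has roughly 2 GiB of scratch space: the df -T / mount parsing that follows walks the candidate directories (the test dir itself, a mktemp fallback under /tmp/spdk.XXXXXX, then the fallback root), picks the first mount with enough free space, and exports it as SPDK_TEST_STORAGE. The requested_size=2214592512 seen below is the 2147483648 argument plus a 64 MiB margin. A simplified sketch of the check, not the actual helper:

    # Sketch: confirm the target directory's filesystem has the requested free space.
    requested_size=$((2147483648 + 67108864))   # 2 GiB plus the margin visible in the trace
    target_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
    mount_point=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    avail_bytes=$(df -B1 --output=avail "$mount_point" | tail -1)
    if (( avail_bytes >= requested_size )); then
        export SPDK_TEST_STORAGE="$target_dir"
    fi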
00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.mZiBv7 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.mZiBv7/tests/target /tmp/spdk.mZiBv7 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=722997248 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:55.639 10:13:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4561432576 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=88694931456 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=95552405504 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6857474048 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47764836352 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776202752 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19087462400 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19110481920 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23019520 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47775793152 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776202752 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=409600 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:55.639 10:13:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=9555226624 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9555238912 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:55.639 * Looking for test storage... 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=88694931456 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9072066560 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:55.639 10:13:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:55.639 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:55.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.640 --rc genhtml_branch_coverage=1 00:11:55.640 --rc genhtml_function_coverage=1 00:11:55.640 --rc genhtml_legend=1 00:11:55.640 --rc geninfo_all_blocks=1 00:11:55.640 --rc geninfo_unexecuted_blocks=1 00:11:55.640 00:11:55.640 ' 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:55.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.640 --rc genhtml_branch_coverage=1 00:11:55.640 --rc genhtml_function_coverage=1 00:11:55.640 --rc genhtml_legend=1 00:11:55.640 --rc geninfo_all_blocks=1 00:11:55.640 --rc geninfo_unexecuted_blocks=1 00:11:55.640 00:11:55.640 ' 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:55.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.640 --rc genhtml_branch_coverage=1 00:11:55.640 --rc genhtml_function_coverage=1 00:11:55.640 --rc genhtml_legend=1 00:11:55.640 --rc geninfo_all_blocks=1 00:11:55.640 --rc geninfo_unexecuted_blocks=1 00:11:55.640 00:11:55.640 ' 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:55.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.640 --rc genhtml_branch_coverage=1 00:11:55.640 --rc genhtml_function_coverage=1 00:11:55.640 --rc genhtml_legend=1 00:11:55.640 --rc geninfo_all_blocks=1 00:11:55.640 --rc geninfo_unexecuted_blocks=1 00:11:55.640 00:11:55.640 ' 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:55.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:55.640 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:55.641 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:55.641 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:55.641 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.641 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:55.641 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:55.641 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:55.641 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.641 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.641 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.641 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:55.641 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:55.641 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:55.641 10:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:00.904 
10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:00.904 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:00.904 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.904 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:00.905 Found net devices under 0000:af:00.0: cvl_0_0 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:00.905 Found net devices under 
0000:af:00.1: cvl_0_1 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:00.905 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:01.162 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:01.162 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:01.162 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:01.162 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:01.162 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:01.162 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:01.162 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:01.162 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:01.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:01.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:12:01.162 00:12:01.162 --- 10.0.0.2 ping statistics --- 00:12:01.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.162 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:12:01.162 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:01.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:01.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:12:01.162 00:12:01.162 --- 10.0.0.1 ping statistics --- 00:12:01.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.162 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:12:01.162 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:01.162 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:01.162 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:01.162 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:01.162 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:01.162 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:01.162 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:01.162 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:01.162 10:13:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:01.162 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:01.162 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:01.162 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.162 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.162 ************************************ 00:12:01.162 START TEST nvmf_filesystem_no_in_capsule 00:12:01.162 ************************************ 00:12:01.162 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:01.162 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:01.162 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:01.162 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:01.162 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:01.163 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
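For reference, the network preparation traced above reduces to the following sequence (a condensed sketch of what the harness's nvmf_tcp_init performs in this run; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are the values this run reported, not fixed defaults):

    # the first E810 port goes into a private namespace for the target,
    # the second port stays in the root namespace for the initiator
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                                    # reachability check, both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1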
00:12:01.163 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3812917 00:12:01.163 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3812917 00:12:01.163 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:01.163 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3812917 ']' 00:12:01.163 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.163 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.163 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.163 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.163 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.419 [2024-12-13 10:13:55.127249] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:01.419 [2024-12-13 10:13:55.127335] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.419 [2024-12-13 10:13:55.244700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:01.676 [2024-12-13 10:13:55.354602] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.676 [2024-12-13 10:13:55.354646] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.676 [2024-12-13 10:13:55.354657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.676 [2024-12-13 10:13:55.354668] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.676 [2024-12-13 10:13:55.354676] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
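At this point nvmf_tgt is running inside the cvl_0_0_ns_spdk namespace (pid 3812917) and listening for JSON-RPC on /var/tmp/spdk.sock; the rpc_cmd calls that follow in the trace are wrappers around that socket. Outside the harness the same first configuration step could be issued with SPDK's stock RPC client (a sketch under that assumption; the method name and flags are copied verbatim from the trace, and -c is the in-capsule data size this suite varies between 0 and 4096):

    # create the NVMe/TCP transport with the options the harness passes through;
    # the UNIX socket is reachable from the root namespace as well
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192 -c 0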
00:12:01.676 [2024-12-13 10:13:55.356871] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.676 [2024-12-13 10:13:55.356952] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:01.676 [2024-12-13 10:13:55.357048] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.676 [2024-12-13 10:13:55.357058] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.240 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.240 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:02.240 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:02.240 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:02.240 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.240 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.240 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:02.240 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:02.240 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.240 10:13:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.240 [2024-12-13 10:13:55.978046] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:02.240 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.240 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:02.240 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.240 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.804 Malloc1 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.804 10:13:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.804 [2024-12-13 10:13:56.593444] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:02.804 { 00:12:02.804 "name": "Malloc1", 00:12:02.804 "aliases": [ 00:12:02.804 "585aac02-0140-48b2-9e4f-33f94cbdcb28" 00:12:02.804 ], 00:12:02.804 "product_name": "Malloc disk", 00:12:02.804 "block_size": 512, 00:12:02.804 "num_blocks": 1048576, 00:12:02.804 "uuid": "585aac02-0140-48b2-9e4f-33f94cbdcb28", 00:12:02.804 "assigned_rate_limits": { 00:12:02.804 "rw_ios_per_sec": 0, 00:12:02.804 "rw_mbytes_per_sec": 0, 00:12:02.804 "r_mbytes_per_sec": 0, 00:12:02.804 "w_mbytes_per_sec": 0 00:12:02.804 }, 00:12:02.804 "claimed": true, 00:12:02.804 "claim_type": "exclusive_write", 00:12:02.804 "zoned": false, 00:12:02.804 "supported_io_types": { 00:12:02.804 "read": 
true, 00:12:02.804 "write": true, 00:12:02.804 "unmap": true, 00:12:02.804 "flush": true, 00:12:02.804 "reset": true, 00:12:02.804 "nvme_admin": false, 00:12:02.804 "nvme_io": false, 00:12:02.804 "nvme_io_md": false, 00:12:02.804 "write_zeroes": true, 00:12:02.804 "zcopy": true, 00:12:02.804 "get_zone_info": false, 00:12:02.804 "zone_management": false, 00:12:02.804 "zone_append": false, 00:12:02.804 "compare": false, 00:12:02.804 "compare_and_write": false, 00:12:02.804 "abort": true, 00:12:02.804 "seek_hole": false, 00:12:02.804 "seek_data": false, 00:12:02.804 "copy": true, 00:12:02.804 "nvme_iov_md": false 00:12:02.804 }, 00:12:02.804 "memory_domains": [ 00:12:02.804 { 00:12:02.804 "dma_device_id": "system", 00:12:02.804 "dma_device_type": 1 00:12:02.804 }, 00:12:02.804 { 00:12:02.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.804 "dma_device_type": 2 00:12:02.804 } 00:12:02.804 ], 00:12:02.804 "driver_specific": {} 00:12:02.804 } 00:12:02.804 ]' 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:02.804 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:03.061 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:03.061 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:03.061 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:03.061 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:03.061 10:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:03.991 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:03.991 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:03.991 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.991 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:03.991 10:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:06.508 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:06.508 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:06.508 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:06.508 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:06.508 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:06.508 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:06.508 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:06.508 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:06.508 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:06.508 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:06.508 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:06.508 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:06.508 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:06.508 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:06.508 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:06.508 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:06.508 10:13:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:06.508 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:06.509 10:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:07.439 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:07.439 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:07.439 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:07.439 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.439 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.696 ************************************ 00:12:07.696 START TEST filesystem_ext4 00:12:07.696 ************************************ 00:12:07.696 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
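Before the ext4 run begins: condensed, the provisioning and host-attach steps traced above (after the transport creation) are the following; every command is taken from the trace, and the subsystem NQN, serial, the 10.0.0.2 listener and the resulting nvme0n1 device name are specific to this run:

    rpc_cmd bdev_malloc_create 512 512 -b Malloc1                  # 512 MiB RAM-backed bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # --hostnqn/--hostid elided; device appears as nvme0n1
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%    # one partition reused by the ext4, btrfs and xfs subtests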
00:12:07.696 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:07.696 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:07.696 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:07.696 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:07.696 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:07.696 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:07.696 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:07.696 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:07.696 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:07.696 10:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:07.696 mke2fs 1.47.0 (5-Feb-2023) 00:12:07.696 Discarding device blocks: 0/522240 done 00:12:07.696 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:07.696 Filesystem UUID: f1f16f86-b47a-4fae-aff4-3cf4bc941805 00:12:07.696 Superblock backups stored on blocks: 00:12:07.696 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:07.696 00:12:07.696 Allocating group tables: 0/64 done 00:12:07.696 Writing inode tables: 0/64 done 00:12:08.259 Creating journal (8192 blocks): done 00:12:10.191 Writing superblocks and filesystem accounting information: 0/64 done 00:12:10.191 00:12:10.191 10:14:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:10.191 10:14:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:16.734 10:14:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:16.734 
10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3812917 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:16.734 00:12:16.734 real 0m8.740s 00:12:16.734 user 0m0.025s 00:12:16.734 sys 0m0.075s 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:16.734 ************************************ 00:12:16.734 END TEST filesystem_ext4 00:12:16.734 ************************************ 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.734 ************************************ 00:12:16.734 START TEST filesystem_btrfs 00:12:16.734 ************************************ 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:16.734 10:14:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:16.734 btrfs-progs v6.8.1 00:12:16.734 See https://btrfs.readthedocs.io for more information. 00:12:16.734 00:12:16.734 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:16.734 NOTE: several default settings have changed in version 5.15, please make sure 00:12:16.734 this does not affect your deployments: 00:12:16.734 - DUP for metadata (-m dup) 00:12:16.734 - enabled no-holes (-O no-holes) 00:12:16.734 - enabled free-space-tree (-R free-space-tree) 00:12:16.734 00:12:16.734 Label: (null) 00:12:16.734 UUID: 68d4a410-0deb-45c0-9f58-4656fbb96781 00:12:16.734 Node size: 16384 00:12:16.734 Sector size: 4096 (CPU page size: 4096) 00:12:16.734 Filesystem size: 510.00MiB 00:12:16.734 Block group profiles: 00:12:16.734 Data: single 8.00MiB 00:12:16.734 Metadata: DUP 32.00MiB 00:12:16.734 System: DUP 8.00MiB 00:12:16.734 SSD detected: yes 00:12:16.734 Zoned device: no 00:12:16.734 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:16.734 Checksum: crc32c 00:12:16.734 Number of devices: 1 00:12:16.734 Devices: 00:12:16.734 ID SIZE PATH 00:12:16.734 1 510.00MiB /dev/nvme0n1p1 00:12:16.734 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:16.734 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3812917 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:16.992 
10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:16.992 00:12:16.992 real 0m0.576s 00:12:16.992 user 0m0.033s 00:12:16.992 sys 0m0.107s 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:16.992 ************************************ 00:12:16.992 END TEST filesystem_btrfs 00:12:16.992 ************************************ 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.992 ************************************ 00:12:16.992 START TEST filesystem_xfs 00:12:16.992 ************************************ 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:16.992 10:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:17.249 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:17.249 = sectsz=512 attr=2, projid32bit=1 00:12:17.249 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:17.249 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:17.249 data 
= bsize=4096 blocks=130560, imaxpct=25 00:12:17.249 = sunit=0 swidth=0 blks 00:12:17.249 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:17.249 log =internal log bsize=4096 blocks=16384, version=2 00:12:17.249 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:17.249 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:17.812 Discarding blocks...Done. 00:12:17.812 10:14:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:17.812 10:14:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:19.707 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:19.707 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:19.707 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:19.707 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:19.707 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:19.707 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:19.707 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3812917 00:12:19.707 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:19.707 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:19.707 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:19.707 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:19.707 00:12:19.707 real 0m2.667s 00:12:19.707 user 0m0.026s 00:12:19.707 sys 0m0.072s 00:12:19.707 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.707 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:19.707 ************************************ 00:12:19.707 END TEST filesystem_xfs 00:12:19.707 ************************************ 00:12:19.707 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:19.963 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:19.963 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:20.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.220 10:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:20.220 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:20.220 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:20.220 10:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.220 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:20.220 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.220 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:20.220 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.220 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.220 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.220 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.220 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:20.220 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3812917 00:12:20.220 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3812917 ']' 00:12:20.220 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3812917 00:12:20.220 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:20.220 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:20.220 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3812917 00:12:20.220 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:20.220 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:20.220 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3812917' 00:12:20.220 killing process with pid 3812917 00:12:20.220 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3812917 00:12:20.220 10:14:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 3812917 00:12:23.493 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:23.493 00:12:23.493 real 0m21.637s 00:12:23.493 user 1m23.840s 00:12:23.493 sys 0m1.546s 00:12:23.493 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.493 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.493 ************************************ 00:12:23.493 END TEST nvmf_filesystem_no_in_capsule 00:12:23.493 ************************************ 00:12:23.493 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:23.493 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:23.493 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.493 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:23.493 ************************************ 00:12:23.493 START TEST nvmf_filesystem_in_capsule 00:12:23.493 ************************************ 00:12:23.493 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:23.493 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:23.493 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:23.493 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:23.493 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:23.493 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.493 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3816720 00:12:23.493 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3816720 00:12:23.493 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:23.493 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3816720 ']' 00:12:23.493 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.494 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.494 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:23.494 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.494 10:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.494 [2024-12-13 10:14:16.838737] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:23.494 [2024-12-13 10:14:16.838824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.494 [2024-12-13 10:14:16.955826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.494 [2024-12-13 10:14:17.063215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.494 [2024-12-13 10:14:17.063258] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.494 [2024-12-13 10:14:17.063268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.494 [2024-12-13 10:14:17.063294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.494 [2024-12-13 10:14:17.063302] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:23.494 [2024-12-13 10:14:17.065746] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.494 [2024-12-13 10:14:17.065821] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.494 [2024-12-13 10:14:17.065919] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.494 [2024-12-13 10:14:17.065928] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.057 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.057 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:24.057 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:24.057 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:24.057 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.057 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.057 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:24.057 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:24.057 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.057 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.057 [2024-12-13 10:14:17.690482] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.057 10:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.057 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:24.057 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.057 10:14:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.621 Malloc1 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.621 [2024-12-13 10:14:18.304377] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:24.621 10:14:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:24.621 { 00:12:24.621 "name": "Malloc1", 00:12:24.621 "aliases": [ 00:12:24.621 "b22b346a-d536-4d6b-9ee3-2612613dd34a" 00:12:24.621 ], 00:12:24.621 "product_name": "Malloc disk", 00:12:24.621 "block_size": 512, 00:12:24.621 "num_blocks": 1048576, 00:12:24.621 "uuid": "b22b346a-d536-4d6b-9ee3-2612613dd34a", 00:12:24.621 "assigned_rate_limits": { 00:12:24.621 "rw_ios_per_sec": 0, 00:12:24.621 "rw_mbytes_per_sec": 0, 00:12:24.621 "r_mbytes_per_sec": 0, 00:12:24.621 "w_mbytes_per_sec": 0 00:12:24.621 }, 00:12:24.621 "claimed": true, 00:12:24.621 "claim_type": "exclusive_write", 00:12:24.621 "zoned": false, 00:12:24.621 "supported_io_types": { 00:12:24.621 "read": true, 00:12:24.621 "write": true, 00:12:24.621 "unmap": true, 00:12:24.621 "flush": true, 00:12:24.621 "reset": true, 00:12:24.621 "nvme_admin": false, 00:12:24.621 "nvme_io": false, 00:12:24.621 "nvme_io_md": false, 00:12:24.621 "write_zeroes": true, 00:12:24.621 "zcopy": true, 00:12:24.621 "get_zone_info": false, 00:12:24.621 "zone_management": false, 00:12:24.621 "zone_append": false, 00:12:24.621 "compare": false, 00:12:24.621 "compare_and_write": false, 00:12:24.621 "abort": true, 00:12:24.621 "seek_hole": false, 00:12:24.621 "seek_data": false, 00:12:24.621 "copy": true, 00:12:24.621 "nvme_iov_md": false 00:12:24.621 }, 00:12:24.621 "memory_domains": [ 00:12:24.621 { 00:12:24.621 "dma_device_id": "system", 00:12:24.621 "dma_device_type": 1 00:12:24.621 }, 00:12:24.621 { 00:12:24.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.621 "dma_device_type": 2 00:12:24.621 } 00:12:24.621 ], 00:12:24.621 "driver_specific": {} 00:12:24.621 } 00:12:24.621 ]' 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:24.621 10:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:26.041 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:26.041 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:26.041 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:26.041 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:26.041 10:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:27.986 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:27.987 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:27.987 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:27.987 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:27.987 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:27.987 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:27.987 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:27.987 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:27.987 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:27.987 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:27.987 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:27.987 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:27.987 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:27.987 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:27.987 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:27.987 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:27.987 10:14:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:28.244 10:14:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:28.244 10:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:29.173 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:29.173 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:29.173 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:29.173 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.173 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.431 ************************************ 00:12:29.431 START TEST filesystem_in_capsule_ext4 00:12:29.431 ************************************ 00:12:29.431 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:29.431 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:29.431 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:29.431 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:29.431 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:29.431 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:29.431 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:29.431 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:29.431 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:29.431 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:29.431 10:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:29.431 mke2fs 1.47.0 (5-Feb-2023) 00:12:29.431 Discarding device blocks: 0/522240 done 00:12:29.431 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:29.431 Filesystem UUID: 96ec6e0d-39f2-4405-a671-e9c5951d9dcb 00:12:29.431 Superblock backups stored on blocks: 00:12:29.431 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:29.431 00:12:29.431 Allocating group tables: 0/64 done 00:12:29.431 Writing inode tables: 
0/64 done 00:12:29.688 Creating journal (8192 blocks): done 00:12:31.876 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:12:31.876 00:12:31.876 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:31.876 10:14:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:37.124 10:14:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:37.381 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3816720 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:37.382 00:12:37.382 real 0m8.023s 00:12:37.382 user 0m0.030s 00:12:37.382 sys 0m0.071s 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:37.382 ************************************ 00:12:37.382 END TEST filesystem_in_capsule_ext4 00:12:37.382 ************************************ 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.382 
************************************ 00:12:37.382 START TEST filesystem_in_capsule_btrfs 00:12:37.382 ************************************ 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:37.382 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:37.639 btrfs-progs v6.8.1 00:12:37.639 See https://btrfs.readthedocs.io for more information. 00:12:37.639 00:12:37.639 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:37.639 NOTE: several default settings have changed in version 5.15, please make sure 00:12:37.639 this does not affect your deployments: 00:12:37.639 - DUP for metadata (-m dup) 00:12:37.639 - enabled no-holes (-O no-holes) 00:12:37.639 - enabled free-space-tree (-R free-space-tree) 00:12:37.639 00:12:37.639 Label: (null) 00:12:37.639 UUID: 4981d7b8-f899-43c1-b29b-241ee94a2d8d 00:12:37.639 Node size: 16384 00:12:37.639 Sector size: 4096 (CPU page size: 4096) 00:12:37.639 Filesystem size: 510.00MiB 00:12:37.639 Block group profiles: 00:12:37.639 Data: single 8.00MiB 00:12:37.639 Metadata: DUP 32.00MiB 00:12:37.639 System: DUP 8.00MiB 00:12:37.639 SSD detected: yes 00:12:37.639 Zoned device: no 00:12:37.639 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:37.639 Checksum: crc32c 00:12:37.639 Number of devices: 1 00:12:37.639 Devices: 00:12:37.639 ID SIZE PATH 00:12:37.639 1 510.00MiB /dev/nvme0n1p1 00:12:37.639 00:12:37.639 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:37.639 10:14:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:38.568 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:38.568 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:38.568 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:38.568 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:38.568 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:38.568 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:38.568 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3816720 00:12:38.568 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:38.568 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:38.569 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:38.569 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:38.569 00:12:38.569 real 0m1.074s 00:12:38.569 user 0m0.024s 00:12:38.569 sys 0m0.115s 00:12:38.569 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.569 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:38.569 ************************************ 00:12:38.569 END TEST filesystem_in_capsule_btrfs 00:12:38.569 ************************************ 00:12:38.569 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:38.569 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:38.569 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.569 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:38.569 ************************************ 00:12:38.569 START TEST filesystem_in_capsule_xfs 00:12:38.569 ************************************ 00:12:38.569 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:38.569 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:38.569 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:38.569 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:38.569 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:38.569 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:38.569 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:38.569 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:38.569 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:38.569 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:38.569 10:14:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:38.569 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:38.569 = sectsz=512 attr=2, projid32bit=1 00:12:38.569 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:38.569 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:38.569 data = bsize=4096 blocks=130560, imaxpct=25 00:12:38.569 = sunit=0 swidth=0 blks 00:12:38.569 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:38.569 log =internal log bsize=4096 blocks=16384, version=2 00:12:38.569 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:38.569 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:39.500 Discarding blocks...Done. 
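(For readers skimming the trace, each filesystem_in_capsule_<fs> subtest — ext4 and btrfs above, xfs from here on — reduces to the sequence below. This is a minimal reconstruction from the traced commands, not the test script itself; the device, mount point and target PID match this run, and ext4 uses -F where btrfs/xfs use -f on mkfs.)

# Per-filesystem check as traced: format, push a small write through the
# 4096-byte in-capsule data path, then confirm target and namespace survived.
fstype=xfs                     # ext4 / btrfs / xfs across the three subtests
dev=/dev/nvme0n1p1
mnt=/mnt/device
nvmfpid=3816720                # nvmf_tgt PID reported earlier in this run

mkfs.$fstype -f "$dev"
mount "$dev" "$mnt"
touch "$mnt/aaa"
sync
rm "$mnt/aaa"
sync
umount "$mnt"
kill -0 "$nvmfpid"                          # target process still alive
lsblk -l -o NAME | grep -q -w nvme0n1       # namespace still exposed
lsblk -l -o NAME | grep -q -w nvme0n1p1     # partition still present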
00:12:39.500 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:39.500 10:14:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:41.399 10:14:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:41.399 10:14:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:41.399 10:14:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:41.399 10:14:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:41.399 10:14:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:41.399 10:14:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:41.399 10:14:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3816720 00:12:41.399 10:14:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:41.399 10:14:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:41.399 10:14:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:41.399 10:14:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:41.399 00:12:41.399 real 0m2.686s 00:12:41.399 user 0m0.028s 00:12:41.399 sys 0m0.070s 00:12:41.399 10:14:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.399 10:14:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:41.399 ************************************ 00:12:41.399 END TEST filesystem_in_capsule_xfs 00:12:41.399 ************************************ 00:12:41.399 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:41.658 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:41.658 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3816720 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3816720 ']' 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3816720 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3816720 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3816720' 00:12:41.915 killing process with pid 3816720 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3816720 00:12:41.915 10:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3816720 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:45.195 00:12:45.195 real 0m21.588s 00:12:45.195 user 1m23.607s 00:12:45.195 sys 0m1.617s 00:12:45.195 10:14:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.195 ************************************ 00:12:45.195 END TEST nvmf_filesystem_in_capsule 00:12:45.195 ************************************ 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:45.195 rmmod nvme_tcp 00:12:45.195 rmmod nvme_fabrics 00:12:45.195 rmmod nvme_keyring 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.195 10:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.099 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:47.099 00:12:47.099 real 0m51.495s 00:12:47.099 user 2m49.383s 00:12:47.100 sys 0m7.481s 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:47.100 
************************************ 00:12:47.100 END TEST nvmf_filesystem 00:12:47.100 ************************************ 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:47.100 ************************************ 00:12:47.100 START TEST nvmf_target_discovery 00:12:47.100 ************************************ 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:47.100 * Looking for test storage... 00:12:47.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:47.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.100 --rc genhtml_branch_coverage=1 00:12:47.100 --rc genhtml_function_coverage=1 00:12:47.100 --rc genhtml_legend=1 00:12:47.100 --rc geninfo_all_blocks=1 00:12:47.100 --rc geninfo_unexecuted_blocks=1 00:12:47.100 00:12:47.100 ' 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:47.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.100 --rc genhtml_branch_coverage=1 00:12:47.100 --rc genhtml_function_coverage=1 00:12:47.100 --rc genhtml_legend=1 00:12:47.100 --rc geninfo_all_blocks=1 00:12:47.100 --rc geninfo_unexecuted_blocks=1 00:12:47.100 00:12:47.100 ' 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:47.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.100 --rc genhtml_branch_coverage=1 00:12:47.100 --rc genhtml_function_coverage=1 00:12:47.100 --rc genhtml_legend=1 00:12:47.100 --rc geninfo_all_blocks=1 00:12:47.100 --rc geninfo_unexecuted_blocks=1 00:12:47.100 00:12:47.100 ' 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:47.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.100 --rc genhtml_branch_coverage=1 00:12:47.100 --rc genhtml_function_coverage=1 00:12:47.100 --rc genhtml_legend=1 00:12:47.100 --rc geninfo_all_blocks=1 00:12:47.100 --rc geninfo_unexecuted_blocks=1 00:12:47.100 00:12:47.100 ' 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:47.100 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:47.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:47.101 10:14:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:52.373 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.373 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:52.373 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:52.373 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:52.373 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:52.374 10:14:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:52.374 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:52.374 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:52.374 Found net devices under 0000:af:00.0: cvl_0_0 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:52.374 Found net devices under 0000:af:00.1: cvl_0_1 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.374 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.633 10:14:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:52.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:12:52.633 00:12:52.633 --- 10.0.0.2 ping statistics --- 00:12:52.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.633 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:52.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:12:52.633 00:12:52.633 --- 10.0.0.1 ping statistics --- 00:12:52.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.633 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3823780 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3823780 00:12:52.633 10:14:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3823780 ']' 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.633 10:14:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:52.891 [2024-12-13 10:14:46.546140] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:52.891 [2024-12-13 10:14:46.546235] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.891 [2024-12-13 10:14:46.664773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.891 [2024-12-13 10:14:46.773570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.891 [2024-12-13 10:14:46.773617] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.891 [2024-12-13 10:14:46.773627] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.891 [2024-12-13 10:14:46.773654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.891 [2024-12-13 10:14:46.773663] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:52.891 [2024-12-13 10:14:46.775977] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.891 [2024-12-13 10:14:46.776052] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.891 [2024-12-13 10:14:46.776158] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.891 [2024-12-13 10:14:46.776167] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:53.827 [2024-12-13 10:14:47.404034] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:53.827 Null1 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:53.827 10:14:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:53.827 [2024-12-13 10:14:47.467051] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:53.827 Null2 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:53.827 Null3 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:53.827 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:53.828 Null4 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.828 10:14:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.828 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:54.087 00:12:54.087 Discovery Log Number of Records 6, Generation counter 6 00:12:54.087 =====Discovery Log Entry 0====== 00:12:54.087 trtype: tcp 00:12:54.087 adrfam: ipv4 00:12:54.087 subtype: current discovery subsystem 00:12:54.087 treq: not required 00:12:54.087 portid: 0 00:12:54.087 trsvcid: 4420 00:12:54.087 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:54.087 traddr: 10.0.0.2 00:12:54.087 eflags: explicit discovery connections, duplicate discovery information 00:12:54.087 sectype: none 00:12:54.087 =====Discovery Log Entry 1====== 00:12:54.087 trtype: tcp 00:12:54.087 adrfam: ipv4 00:12:54.087 subtype: nvme subsystem 00:12:54.087 treq: not required 00:12:54.087 portid: 0 00:12:54.087 trsvcid: 4420 00:12:54.087 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:54.087 traddr: 10.0.0.2 00:12:54.087 eflags: none 00:12:54.087 sectype: none 00:12:54.087 =====Discovery Log Entry 2====== 00:12:54.087 trtype: tcp 00:12:54.087 adrfam: ipv4 00:12:54.087 subtype: nvme subsystem 00:12:54.087 treq: not required 00:12:54.087 portid: 0 00:12:54.087 trsvcid: 4420 00:12:54.087 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:54.087 traddr: 10.0.0.2 00:12:54.087 eflags: none 00:12:54.087 sectype: none 00:12:54.087 =====Discovery Log Entry 3====== 00:12:54.087 trtype: tcp 00:12:54.087 adrfam: ipv4 00:12:54.087 subtype: nvme subsystem 00:12:54.087 treq: not required 00:12:54.087 portid: 0 00:12:54.087 trsvcid: 4420 00:12:54.087 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:54.087 traddr: 10.0.0.2 00:12:54.087 eflags: none 00:12:54.087 sectype: none 00:12:54.087 =====Discovery Log Entry 4====== 00:12:54.087 trtype: tcp 00:12:54.087 adrfam: ipv4 00:12:54.087 subtype: nvme subsystem 
00:12:54.087 treq: not required 00:12:54.087 portid: 0 00:12:54.087 trsvcid: 4420 00:12:54.087 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:54.087 traddr: 10.0.0.2 00:12:54.087 eflags: none 00:12:54.087 sectype: none 00:12:54.087 =====Discovery Log Entry 5====== 00:12:54.087 trtype: tcp 00:12:54.087 adrfam: ipv4 00:12:54.087 subtype: discovery subsystem referral 00:12:54.087 treq: not required 00:12:54.087 portid: 0 00:12:54.087 trsvcid: 4430 00:12:54.087 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:54.087 traddr: 10.0.0.2 00:12:54.087 eflags: none 00:12:54.087 sectype: none 00:12:54.087 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:54.087 Perform nvmf subsystem discovery via RPC 00:12:54.087 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:54.087 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.087 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:54.087 [ 00:12:54.087 { 00:12:54.087 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:54.087 "subtype": "Discovery", 00:12:54.087 "listen_addresses": [ 00:12:54.087 { 00:12:54.087 "trtype": "TCP", 00:12:54.087 "adrfam": "IPv4", 00:12:54.087 "traddr": "10.0.0.2", 00:12:54.087 "trsvcid": "4420" 00:12:54.087 } 00:12:54.087 ], 00:12:54.087 "allow_any_host": true, 00:12:54.087 "hosts": [] 00:12:54.087 }, 00:12:54.087 { 00:12:54.087 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:54.087 "subtype": "NVMe", 00:12:54.087 "listen_addresses": [ 00:12:54.087 { 00:12:54.087 "trtype": "TCP", 00:12:54.087 "adrfam": "IPv4", 00:12:54.087 "traddr": "10.0.0.2", 00:12:54.087 "trsvcid": "4420" 00:12:54.087 } 00:12:54.087 ], 00:12:54.087 "allow_any_host": true, 00:12:54.087 "hosts": [], 00:12:54.087 "serial_number": "SPDK00000000000001", 00:12:54.087 "model_number": "SPDK bdev Controller", 00:12:54.087 "max_namespaces": 32, 00:12:54.087 "min_cntlid": 1, 00:12:54.087 "max_cntlid": 65519, 00:12:54.087 "namespaces": [ 00:12:54.087 { 00:12:54.087 "nsid": 1, 00:12:54.087 "bdev_name": "Null1", 00:12:54.087 "name": "Null1", 00:12:54.087 "nguid": "22025B929812491A9E6187DA93568DD1", 00:12:54.087 "uuid": "22025b92-9812-491a-9e61-87da93568dd1" 00:12:54.087 } 00:12:54.087 ] 00:12:54.087 }, 00:12:54.087 { 00:12:54.087 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:54.087 "subtype": "NVMe", 00:12:54.087 "listen_addresses": [ 00:12:54.087 { 00:12:54.087 "trtype": "TCP", 00:12:54.087 "adrfam": "IPv4", 00:12:54.087 "traddr": "10.0.0.2", 00:12:54.087 "trsvcid": "4420" 00:12:54.087 } 00:12:54.087 ], 00:12:54.087 "allow_any_host": true, 00:12:54.087 "hosts": [], 00:12:54.087 "serial_number": "SPDK00000000000002", 00:12:54.087 "model_number": "SPDK bdev Controller", 00:12:54.087 "max_namespaces": 32, 00:12:54.087 "min_cntlid": 1, 00:12:54.087 "max_cntlid": 65519, 00:12:54.087 "namespaces": [ 00:12:54.087 { 00:12:54.087 "nsid": 1, 00:12:54.087 "bdev_name": "Null2", 00:12:54.087 "name": "Null2", 00:12:54.087 "nguid": "EDB0DFE8B44947E18F6113E92BDA7ED7", 00:12:54.087 "uuid": "edb0dfe8-b449-47e1-8f61-13e92bda7ed7" 00:12:54.087 } 00:12:54.087 ] 00:12:54.087 }, 00:12:54.087 { 00:12:54.087 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:54.087 "subtype": "NVMe", 00:12:54.087 "listen_addresses": [ 00:12:54.087 { 00:12:54.087 "trtype": "TCP", 00:12:54.087 "adrfam": "IPv4", 00:12:54.087 "traddr": "10.0.0.2", 
00:12:54.087 "trsvcid": "4420" 00:12:54.087 } 00:12:54.087 ], 00:12:54.088 "allow_any_host": true, 00:12:54.088 "hosts": [], 00:12:54.088 "serial_number": "SPDK00000000000003", 00:12:54.088 "model_number": "SPDK bdev Controller", 00:12:54.088 "max_namespaces": 32, 00:12:54.088 "min_cntlid": 1, 00:12:54.088 "max_cntlid": 65519, 00:12:54.088 "namespaces": [ 00:12:54.088 { 00:12:54.088 "nsid": 1, 00:12:54.088 "bdev_name": "Null3", 00:12:54.088 "name": "Null3", 00:12:54.088 "nguid": "9C688A0DE89144849E9A71779BDD1F02", 00:12:54.088 "uuid": "9c688a0d-e891-4484-9e9a-71779bdd1f02" 00:12:54.088 } 00:12:54.088 ] 00:12:54.088 }, 00:12:54.088 { 00:12:54.088 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:54.088 "subtype": "NVMe", 00:12:54.088 "listen_addresses": [ 00:12:54.088 { 00:12:54.088 "trtype": "TCP", 00:12:54.088 "adrfam": "IPv4", 00:12:54.088 "traddr": "10.0.0.2", 00:12:54.088 "trsvcid": "4420" 00:12:54.088 } 00:12:54.088 ], 00:12:54.088 "allow_any_host": true, 00:12:54.088 "hosts": [], 00:12:54.088 "serial_number": "SPDK00000000000004", 00:12:54.088 "model_number": "SPDK bdev Controller", 00:12:54.088 "max_namespaces": 32, 00:12:54.088 "min_cntlid": 1, 00:12:54.088 "max_cntlid": 65519, 00:12:54.088 "namespaces": [ 00:12:54.088 { 00:12:54.088 "nsid": 1, 00:12:54.088 "bdev_name": "Null4", 00:12:54.088 "name": "Null4", 00:12:54.088 "nguid": "52AC2ECBA74E4CC9BADDE1702D40FF29", 00:12:54.088 "uuid": "52ac2ecb-a74e-4cc9-badd-e1702d40ff29" 00:12:54.088 } 00:12:54.088 ] 00:12:54.088 } 00:12:54.088 ] 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.088 10:14:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:54.088 10:14:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:54.088 10:14:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:54.088 rmmod nvme_tcp 00:12:54.347 rmmod nvme_fabrics 00:12:54.347 rmmod nvme_keyring 00:12:54.347 10:14:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:54.347 10:14:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:54.347 10:14:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:54.347 10:14:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3823780 ']' 00:12:54.347 10:14:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3823780 00:12:54.347 10:14:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3823780 ']' 00:12:54.347 10:14:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3823780 00:12:54.347 10:14:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:54.347 10:14:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.347 10:14:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3823780 00:12:54.347 10:14:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:54.347 10:14:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:54.347 10:14:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3823780' 00:12:54.347 killing process with pid 3823780 00:12:54.347 10:14:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3823780 00:12:54.347 10:14:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3823780 00:12:55.724 10:14:49 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:55.724 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:55.724 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:55.724 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:55.724 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:55.724 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:55.724 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:55.724 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:55.724 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:55.724 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.724 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.724 10:14:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:57.629 00:12:57.629 real 0m10.689s 00:12:57.629 user 0m10.606s 00:12:57.629 sys 0m4.765s 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:57.629 ************************************ 00:12:57.629 END TEST nvmf_target_discovery 00:12:57.629 ************************************ 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:57.629 ************************************ 00:12:57.629 START TEST nvmf_referrals 00:12:57.629 ************************************ 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:57.629 * Looking for test storage... 
00:12:57.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:57.629 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:57.888 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:57.888 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:57.888 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:57.888 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:57.888 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:57.888 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:57.888 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:57.888 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:57.888 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:57.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.888 --rc genhtml_branch_coverage=1 00:12:57.888 --rc genhtml_function_coverage=1 00:12:57.888 --rc genhtml_legend=1 00:12:57.888 --rc geninfo_all_blocks=1 00:12:57.888 --rc geninfo_unexecuted_blocks=1 00:12:57.888 00:12:57.888 ' 00:12:57.888 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:57.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.888 --rc genhtml_branch_coverage=1 00:12:57.888 --rc genhtml_function_coverage=1 00:12:57.888 --rc genhtml_legend=1 00:12:57.888 --rc geninfo_all_blocks=1 00:12:57.888 --rc geninfo_unexecuted_blocks=1 00:12:57.888 00:12:57.888 ' 00:12:57.888 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:57.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.888 --rc genhtml_branch_coverage=1 00:12:57.889 --rc genhtml_function_coverage=1 00:12:57.889 --rc genhtml_legend=1 00:12:57.889 --rc geninfo_all_blocks=1 00:12:57.889 --rc geninfo_unexecuted_blocks=1 00:12:57.889 00:12:57.889 ' 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:57.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.889 --rc genhtml_branch_coverage=1 00:12:57.889 --rc genhtml_function_coverage=1 00:12:57.889 --rc genhtml_legend=1 00:12:57.889 --rc geninfo_all_blocks=1 00:12:57.889 --rc geninfo_unexecuted_blocks=1 00:12:57.889 00:12:57.889 ' 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:57.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:57.889 10:14:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:03.162 10:14:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:03.162 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:03.162 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:03.162 
10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:03.162 Found net devices under 0000:af:00.0: cvl_0_0 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:03.162 Found net devices under 0000:af:00.1: cvl_0_1 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:03.162 10:14:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.162 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.163 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.163 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:03.163 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:03.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:03.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:13:03.163 00:13:03.163 --- 10.0.0.2 ping statistics --- 00:13:03.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.163 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:13:03.163 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:03.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:13:03.163 00:13:03.163 --- 10.0.0.1 ping statistics --- 00:13:03.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.163 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:13:03.163 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.163 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:13:03.163 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:03.163 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.163 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:03.163 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:03.163 10:14:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.163 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:03.163 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:03.163 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:03.163 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:03.163 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:03.163 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:03.163 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3827721 00:13:03.163 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3827721 00:13:03.163 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:03.163 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3827721 ']' 00:13:03.163 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.163 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.163 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
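Condensed from the nvmf_tcp_init trace above: the test builds a point-to-point topology by moving the first E810 port (cvl_0_0) into a private network namespace as the target side (10.0.0.2) while its peer (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), opens the NVMe/TCP port in iptables, and pings in both directions before launching nvmf_tgt inside the namespace. Interface names, addresses and flags below are taken from the log; the standalone form is a sketch, not the test script itself:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                   # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target namespace -> root namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF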
00:13:03.163 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.163 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:03.422 [2024-12-13 10:14:57.109808] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:03.422 [2024-12-13 10:14:57.109893] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.422 [2024-12-13 10:14:57.229420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:03.681 [2024-12-13 10:14:57.333982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.681 [2024-12-13 10:14:57.334027] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.681 [2024-12-13 10:14:57.334037] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.681 [2024-12-13 10:14:57.334048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.681 [2024-12-13 10:14:57.334056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.681 [2024-12-13 10:14:57.336575] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.681 [2024-12-13 10:14:57.336595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.681 [2024-12-13 10:14:57.336693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.681 [2024-12-13 10:14:57.336701] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:13:04.249 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.249 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:04.249 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:04.249 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:04.249 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:04.249 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.249 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:04.249 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.249 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:04.249 [2024-12-13 10:14:57.961653] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.249 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.249 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:04.249 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.249 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:13:04.249 [2024-12-13 10:14:57.987894] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:04.249 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.249 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:04.249 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.249 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:04.249 10:14:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:04.249 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:04.508 10:14:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:04.508 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:04.768 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:05.027 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:05.027 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:05.027 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:05.027 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:05.027 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:05.027 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:05.027 10:14:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:05.285 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:05.285 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:05.285 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:05.285 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:05.285 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:05.285 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:05.544 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:05.544 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:05.544 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.544 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:05.544 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.544 10:14:59 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:05.544 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:05.544 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:05.544 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:05.544 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.544 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:05.544 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:05.544 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.544 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:05.545 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:05.545 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:05.545 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:05.545 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:05.545 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:05.545 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:05.545 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:05.545 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:05.545 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:05.545 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:05.545 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:05.545 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:05.545 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:05.545 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:05.803 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:05.803 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:05.803 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:05.803 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:13:05.803 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:05.803 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:06.062 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:06.062 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:06.062 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.062 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:06.062 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.062 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:06.062 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.062 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:06.062 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:06.062 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.062 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:06.062 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:06.062 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:06.062 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:06.062 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:06.062 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:06.062 10:14:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
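The referral assertions above reduce to a short control-plane sequence against the target started earlier: create the TCP transport, listen for discovery on 10.0.0.2:8009, add and remove referral entries, and confirm from the initiator side that nvme discover reflects each change. A condensed sketch, assuming rpc.py is SPDK's scripts/rpc.py (the test drives the same calls through its rpc_cmd wrapper and passes --hostnqn/--hostid to nvme discover, omitted here):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    rpc.py nvmf_discovery_add_referral    -t tcp -a 127.0.0.2 -s 4430      # plain discovery referral
    rpc.py nvmf_discovery_get_referrals | jq length                        # -> 1
    rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
    rpc.py nvmf_discovery_add_referral    -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'   # -> 127.0.0.2
    rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1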
00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:06.321 rmmod nvme_tcp 00:13:06.321 rmmod nvme_fabrics 00:13:06.321 rmmod nvme_keyring 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3827721 ']' 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3827721 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3827721 ']' 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3827721 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3827721 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3827721' 00:13:06.321 killing process with pid 3827721 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3827721 00:13:06.321 10:15:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3827721 00:13:07.699 10:15:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:07.699 10:15:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:07.699 10:15:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:07.699 10:15:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:07.699 10:15:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:07.699 10:15:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:07.699 10:15:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:07.699 10:15:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:07.699 10:15:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:07.699 10:15:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.699 10:15:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.699 10:15:01 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.603 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:09.603 00:13:09.603 real 0m12.012s 00:13:09.603 user 0m17.297s 00:13:09.603 sys 0m5.042s 00:13:09.603 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.603 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:09.603 ************************************ 00:13:09.603 END TEST nvmf_referrals 00:13:09.603 ************************************ 00:13:09.603 10:15:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:09.603 10:15:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:09.603 10:15:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.603 10:15:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:09.603 ************************************ 00:13:09.603 START TEST nvmf_connect_disconnect 00:13:09.603 ************************************ 00:13:09.603 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:09.863 * Looking for test storage... 00:13:09.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:09.863 10:15:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:09.863 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:09.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.864 --rc genhtml_branch_coverage=1 00:13:09.864 --rc genhtml_function_coverage=1 00:13:09.864 --rc genhtml_legend=1 00:13:09.864 --rc geninfo_all_blocks=1 00:13:09.864 --rc geninfo_unexecuted_blocks=1 00:13:09.864 00:13:09.864 ' 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:09.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.864 --rc genhtml_branch_coverage=1 00:13:09.864 --rc genhtml_function_coverage=1 00:13:09.864 --rc genhtml_legend=1 00:13:09.864 --rc geninfo_all_blocks=1 00:13:09.864 --rc geninfo_unexecuted_blocks=1 00:13:09.864 00:13:09.864 ' 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:09.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.864 --rc genhtml_branch_coverage=1 00:13:09.864 --rc genhtml_function_coverage=1 00:13:09.864 --rc genhtml_legend=1 00:13:09.864 --rc geninfo_all_blocks=1 00:13:09.864 --rc geninfo_unexecuted_blocks=1 00:13:09.864 00:13:09.864 ' 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:09.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.864 --rc genhtml_branch_coverage=1 00:13:09.864 --rc genhtml_function_coverage=1 00:13:09.864 --rc genhtml_legend=1 00:13:09.864 --rc geninfo_all_blocks=1 00:13:09.864 --rc geninfo_unexecuted_blocks=1 00:13:09.864 00:13:09.864 ' 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.864 10:15:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:09.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:09.864 10:15:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:15.131 
10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:15.131 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.131 
10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:15.131 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.131 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:15.132 Found net devices under 0000:af:00.0: cvl_0_0 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:15.132 Found net devices under 0000:af:00.1: cvl_0_1 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:15.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:15.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:13:15.132 00:13:15.132 --- 10.0.0.2 ping statistics --- 00:13:15.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.132 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:15.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:13:15.132 00:13:15.132 --- 10.0.0.1 ping statistics --- 00:13:15.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.132 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:15.132 10:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:15.391 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:15.391 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:15.391 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:15.391 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:15.391 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3832237 00:13:15.391 10:15:09 
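For readers following the nvmf_tcp_init trace above: the target-side port (cvl_0_0) is moved into a dedicated network namespace and the two NIC ports are addressed back-to-back before the NVMe/TCP target is started. A condensed sketch of those steps, using the device names and addresses reported by this run (not a verbatim copy of nvmf/common.sh), looks like:

  # move the target port into its own namespace and address both sides
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp

The ping statistics in the trace above confirm both directions of this topology before nvmfappstart launches nvmf_tgt inside the namespace.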
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3832237 00:13:15.391 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:15.391 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3832237 ']' 00:13:15.391 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.391 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.391 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.391 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.391 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:15.391 [2024-12-13 10:15:09.101377] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:15.391 [2024-12-13 10:15:09.101464] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.391 [2024-12-13 10:15:09.219375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:15.649 [2024-12-13 10:15:09.329202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.649 [2024-12-13 10:15:09.329243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.649 [2024-12-13 10:15:09.329255] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.649 [2024-12-13 10:15:09.329265] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.649 [2024-12-13 10:15:09.329272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:15.649 [2024-12-13 10:15:09.331582] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.649 [2024-12-13 10:15:09.331656] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.649 [2024-12-13 10:15:09.331720] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.649 [2024-12-13 10:15:09.331730] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.216 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:16.216 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:16.216 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:16.216 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:16.216 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:16.216 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.216 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:16.216 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.216 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:16.216 [2024-12-13 10:15:09.977661] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:16.216 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.216 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:16.216 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.216 10:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:16.216 10:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.216 10:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:16.216 10:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:16.217 10:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.217 10:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:16.217 10:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.217 10:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:16.217 10:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.217 10:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:16.217 10:15:10 
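Once nvmf_tgt is running inside the namespace, the trace shows the target being assembled over JSON-RPC (rpc_cmd is the autotest wrapper around SPDK's RPC client), followed by the listener add and a 100-iteration connect/disconnect loop whose per-iteration commands are not traced (set +x). A hedged reconstruction of that flow, with the nvme connect/disconnect invocations filled in as assumptions rather than copied from connect_disconnect.sh:

  # target bring-up, as recorded in the trace
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc_cmd bdev_malloc_create 64 512                      # -> Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # connect/disconnect loop (illustrative; exact flags assumed)
  for ((i = 0; i < 100; i++)); do
      nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 \
          -n nqn.2016-06.io.spdk:cnode1 \
          --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # emits the "disconnected 1 controller(s)" lines below
  done

Each of the repeated "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" messages that follow corresponds to one iteration of this loop; the teardown afterwards (nvmftestfini) unloads nvme-tcp/nvme-fabrics and kills the nvmf_tgt process.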
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.217 10:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.217 10:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.217 10:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:16.217 [2024-12-13 10:15:10.098240] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.217 10:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.217 10:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:13:16.217 10:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:13:16.217 10:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:13:16.217 10:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:18.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.830 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:16:25.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:58.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:03.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:05.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:07.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:10.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.574 10:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:12.574 10:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:12.574 10:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:12.574 10:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:17:12.574 10:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:12.574 10:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:17:12.574 10:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:12.574 10:19:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:12.574 rmmod nvme_tcp 00:17:12.574 rmmod nvme_fabrics 00:17:12.574 rmmod nvme_keyring 00:17:12.574 10:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:12.574 10:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:17:12.574 10:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:17:12.574 10:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3832237 ']' 00:17:12.574 10:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3832237 00:17:12.574 10:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3832237 ']' 00:17:12.574 10:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3832237 00:17:12.574 10:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
00:17:12.574 10:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:12.574 10:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3832237 00:17:12.574 10:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:12.574 10:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:12.574 10:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3832237' 00:17:12.574 killing process with pid 3832237 00:17:12.574 10:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3832237 00:17:12.574 10:19:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3832237 00:17:13.953 10:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:13.953 10:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:13.953 10:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:13.953 10:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:17:13.953 10:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:13.953 10:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:17:13.953 10:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:17:13.953 10:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:13.953 10:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:13.953 10:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.953 10:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:13.953 10:19:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.861 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:15.861 00:17:15.861 real 4m6.079s 00:17:15.861 user 15m41.140s 00:17:15.861 sys 0m24.748s 00:17:15.861 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:15.861 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:15.861 ************************************ 00:17:15.861 END TEST nvmf_connect_disconnect 00:17:15.861 ************************************ 00:17:15.861 10:19:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:15.861 10:19:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:15.861 10:19:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:15.861 10:19:09 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:15.861 ************************************ 00:17:15.861 START TEST nvmf_multitarget 00:17:15.861 ************************************ 00:17:15.861 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:15.861 * Looking for test storage... 00:17:15.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:15.861 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:15.861 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:17:15.861 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:16.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.122 --rc genhtml_branch_coverage=1 00:17:16.122 --rc genhtml_function_coverage=1 00:17:16.122 --rc genhtml_legend=1 00:17:16.122 --rc geninfo_all_blocks=1 00:17:16.122 --rc geninfo_unexecuted_blocks=1 00:17:16.122 00:17:16.122 ' 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:16.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.122 --rc genhtml_branch_coverage=1 00:17:16.122 --rc genhtml_function_coverage=1 00:17:16.122 --rc genhtml_legend=1 00:17:16.122 --rc geninfo_all_blocks=1 00:17:16.122 --rc geninfo_unexecuted_blocks=1 00:17:16.122 00:17:16.122 ' 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:16.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.122 --rc genhtml_branch_coverage=1 00:17:16.122 --rc genhtml_function_coverage=1 00:17:16.122 --rc genhtml_legend=1 00:17:16.122 --rc geninfo_all_blocks=1 00:17:16.122 --rc geninfo_unexecuted_blocks=1 00:17:16.122 00:17:16.122 ' 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:16.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.122 --rc genhtml_branch_coverage=1 00:17:16.122 --rc genhtml_function_coverage=1 00:17:16.122 --rc genhtml_legend=1 00:17:16.122 --rc geninfo_all_blocks=1 00:17:16.122 --rc geninfo_unexecuted_blocks=1 00:17:16.122 00:17:16.122 ' 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:16.122 10:19:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.122 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:16.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:16.123 10:19:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:17:16.123 10:19:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:21.559 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:21.559 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:21.559 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:21.559 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:21.559 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:21.559 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:21.559 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:21.559 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:21.560 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:21.560 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:21.560 Found net devices under 0000:af:00.0: cvl_0_0 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:21.560 Found net devices under 0000:af:00.1: cvl_0_1 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:21.560 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:21.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:21.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:17:21.820 00:17:21.820 --- 10.0.0.2 ping statistics --- 00:17:21.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.820 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:21.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
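For readers following the nvmf_tcp_init trace above: the setup boils down to flushing both E810 ports, moving the target-side port into a private network namespace, and addressing the two ends as a point-to-point 10.0.0.0/24 link that the pings then validate. A stand-alone sketch of that sequence (interface, namespace names and addresses are taken from this run; the exact nvmf/common.sh helper code differs):

  # illustrative sketch only, not the verbatim nvmf/common.sh helper
  TGT_IF=cvl_0_0          # port handed to the SPDK target
  INI_IF=cvl_0_1          # port left on the host as the initiator side
  NS=cvl_0_0_ns_spdk      # namespace the target will run in
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                           # target port leaves the host view
  ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target address
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF: test rule'             # tagged so teardown can find it later
  ping -c 1 10.0.0.2                                          # host -> target reachability
  ip netns exec "$NS" ping -c 1 10.0.0.1                      # target -> host reachability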
00:17:21.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:17:21.820 00:17:21.820 --- 10.0.0.1 ping statistics --- 00:17:21.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.820 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3876199 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3876199 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3876199 ']' 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.820 10:19:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:21.820 [2024-12-13 10:19:15.597784] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
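nvmfappstart, traced here, launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the RPC socket answers. A simplified stand-in for that start-and-wait pattern (the real helper in autotest_common.sh is more involved; the retry count and the rpc.py invocation below are illustrative only):

  # start the target in the namespace, then poll its RPC socket
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for (( i = 0; i < 100; i++ )); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
      sleep 0.5
  done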
00:17:21.820 [2024-12-13 10:19:15.597875] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.079 [2024-12-13 10:19:15.712522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:22.079 [2024-12-13 10:19:15.828760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.079 [2024-12-13 10:19:15.828803] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.079 [2024-12-13 10:19:15.828814] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.079 [2024-12-13 10:19:15.828824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.079 [2024-12-13 10:19:15.828832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.079 [2024-12-13 10:19:15.831206] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.079 [2024-12-13 10:19:15.831279] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.079 [2024-12-13 10:19:15.831396] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.079 [2024-12-13 10:19:15.831404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:22.648 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.648 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:17:22.648 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:22.648 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:22.648 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:22.648 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.648 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:22.648 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:22.648 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:22.907 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:22.907 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:22.907 "nvmf_tgt_1" 00:17:22.907 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:22.907 "nvmf_tgt_2" 00:17:22.907 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
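Stripped of the harness, the multitarget exercise traced here is a short create/count/delete round trip against the target's JSON-RPC socket. A condensed rendering (script path shortened; the 1-vs-3 counts assume only the default target exists at the start, as in this run):

  # sketch of the multitarget round trip driven by multitarget_rpc.py
  RPC=test/nvmf/target/multitarget_rpc.py
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # only the default target so far
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new targets
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default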
00:17:22.907 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:23.166 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:23.166 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:23.166 true 00:17:23.166 10:19:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:23.425 true 00:17:23.425 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:23.425 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:23.425 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:23.425 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:23.425 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:23.425 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:23.425 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:23.425 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:23.425 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:23.425 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:23.425 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:23.425 rmmod nvme_tcp 00:17:23.425 rmmod nvme_fabrics 00:17:23.425 rmmod nvme_keyring 00:17:23.425 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:23.425 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:23.425 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:23.425 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3876199 ']' 00:17:23.425 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3876199 00:17:23.425 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3876199 ']' 00:17:23.425 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3876199 00:17:23.425 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:17:23.425 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:23.425 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3876199 00:17:23.685 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:23.685 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:23.685 10:19:17 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3876199' 00:17:23.685 killing process with pid 3876199 00:17:23.685 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3876199 00:17:23.685 10:19:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3876199 00:17:24.622 10:19:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:24.622 10:19:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:24.622 10:19:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:24.622 10:19:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:24.622 10:19:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:17:24.622 10:19:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:17:24.622 10:19:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:24.622 10:19:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:24.622 10:19:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:24.622 10:19:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.622 10:19:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:24.622 10:19:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:27.160 00:17:27.160 real 0m10.950s 00:17:27.160 user 0m12.538s 00:17:27.160 sys 0m4.868s 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:27.160 ************************************ 00:17:27.160 END TEST nvmf_multitarget 00:17:27.160 ************************************ 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:27.160 ************************************ 00:17:27.160 START TEST nvmf_rpc 00:17:27.160 ************************************ 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:27.160 * Looking for test storage... 
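The nvmftestfini teardown traced just above is symmetric to the setup: stop the target, strip only the iptables rules that ipts tagged, and drop the namespace. In outline (remove_spdk_ns runs with xtrace disabled, so the namespace-deletion step below is inferred rather than copied from the log):

  # teardown outline; only SPDK-tagged state is undone
  kill "$nvmfpid" && wait "$nvmfpid"                       # stop the nvmf_tgt started earlier
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # keep every rule the test did not add
  ip netns delete cvl_0_0_ns_spdk                          # physical port returns to the host namespace
  ip -4 addr flush cvl_0_1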
00:17:27.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:27.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.160 --rc genhtml_branch_coverage=1 00:17:27.160 --rc genhtml_function_coverage=1 00:17:27.160 --rc genhtml_legend=1 00:17:27.160 --rc geninfo_all_blocks=1 00:17:27.160 --rc geninfo_unexecuted_blocks=1 00:17:27.160 00:17:27.160 ' 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:27.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.160 --rc genhtml_branch_coverage=1 00:17:27.160 --rc genhtml_function_coverage=1 00:17:27.160 --rc genhtml_legend=1 00:17:27.160 --rc geninfo_all_blocks=1 00:17:27.160 --rc geninfo_unexecuted_blocks=1 00:17:27.160 00:17:27.160 ' 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:27.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.160 --rc genhtml_branch_coverage=1 00:17:27.160 --rc genhtml_function_coverage=1 00:17:27.160 --rc genhtml_legend=1 00:17:27.160 --rc geninfo_all_blocks=1 00:17:27.160 --rc geninfo_unexecuted_blocks=1 00:17:27.160 00:17:27.160 ' 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:27.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.160 --rc genhtml_branch_coverage=1 00:17:27.160 --rc genhtml_function_coverage=1 00:17:27.160 --rc genhtml_legend=1 00:17:27.160 --rc geninfo_all_blocks=1 00:17:27.160 --rc geninfo_unexecuted_blocks=1 00:17:27.160 00:17:27.160 ' 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
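The lcov gate traced a few entries above relies on cmp_versions from scripts/common.sh: split each version string on '.', '-' and ':' and compare the pieces numerically, left to right. A stripped-down rendering of that idea (this is not the library code, and it assumes purely numeric components such as the 1.15-versus-2 comparison shown here):

  # returns 0 (true) when $1 is strictly lower than $2
  version_lt() {
      local IFS=.-:                        # same separators the trace shows
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                             # equal is not "less than"
  }
  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi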
00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.160 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:27.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:27.161 10:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:27.161 10:19:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.434 10:19:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:32.434 10:19:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:32.434 10:19:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:32.434 10:19:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:32.434 10:19:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:32.434 10:19:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:32.434 10:19:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:32.434 10:19:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:32.434 10:19:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:32.435 10:19:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:32.435 10:19:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:32.435 10:19:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:32.435 10:19:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:32.435 10:19:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:32.435 10:19:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:32.435 10:19:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:32.435 10:19:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:32.435 10:19:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:32.435 10:19:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:32.435 10:19:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:32.435 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:32.435 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:32.435 Found net devices under 0000:af:00.0: cvl_0_0 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:32.435 Found net devices under 0000:af:00.1: cvl_0_1 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:32.435 10:19:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:32.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:32.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:17:32.435 00:17:32.435 --- 10.0.0.2 ping statistics --- 00:17:32.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.435 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:32.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:32.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:17:32.435 00:17:32.435 --- 10.0.0.1 ping statistics --- 00:17:32.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.435 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:32.435 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:32.436 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:32.436 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:32.436 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:32.436 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:32.436 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:32.436 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:32.436 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.436 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3880139 00:17:32.436 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3880139 00:17:32.436 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:32.436 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3880139 ']' 00:17:32.436 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.436 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:32.436 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.694 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:32.694 10:19:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:32.694 [2024-12-13 10:19:26.403118] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:17:32.694 [2024-12-13 10:19:26.403218] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.694 [2024-12-13 10:19:26.522892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:32.954 [2024-12-13 10:19:26.628661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.954 [2024-12-13 10:19:26.628707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:32.954 [2024-12-13 10:19:26.628717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:32.954 [2024-12-13 10:19:26.628728] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:32.954 [2024-12-13 10:19:26.628735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:32.954 [2024-12-13 10:19:26.631165] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.954 [2024-12-13 10:19:26.631181] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.954 [2024-12-13 10:19:26.631285] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.954 [2024-12-13 10:19:26.631293] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:33.522 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:33.522 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:33.522 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:33.522 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:33.522 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.523 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.523 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:33.523 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.523 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.523 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.523 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:33.523 "tick_rate": 2100000000, 00:17:33.523 "poll_groups": [ 00:17:33.523 { 00:17:33.523 "name": "nvmf_tgt_poll_group_000", 00:17:33.523 "admin_qpairs": 0, 00:17:33.523 "io_qpairs": 0, 00:17:33.523 "current_admin_qpairs": 0, 00:17:33.523 "current_io_qpairs": 0, 00:17:33.523 "pending_bdev_io": 0, 00:17:33.523 "completed_nvme_io": 0, 00:17:33.523 "transports": [] 00:17:33.523 }, 00:17:33.523 { 00:17:33.523 "name": "nvmf_tgt_poll_group_001", 00:17:33.523 "admin_qpairs": 0, 00:17:33.523 "io_qpairs": 0, 00:17:33.523 "current_admin_qpairs": 0, 00:17:33.523 "current_io_qpairs": 0, 00:17:33.523 "pending_bdev_io": 0, 00:17:33.523 "completed_nvme_io": 0, 00:17:33.523 "transports": [] 00:17:33.523 }, 00:17:33.523 { 00:17:33.523 "name": "nvmf_tgt_poll_group_002", 00:17:33.523 "admin_qpairs": 0, 00:17:33.523 "io_qpairs": 0, 00:17:33.523 
"current_admin_qpairs": 0, 00:17:33.523 "current_io_qpairs": 0, 00:17:33.523 "pending_bdev_io": 0, 00:17:33.523 "completed_nvme_io": 0, 00:17:33.523 "transports": [] 00:17:33.523 }, 00:17:33.523 { 00:17:33.523 "name": "nvmf_tgt_poll_group_003", 00:17:33.523 "admin_qpairs": 0, 00:17:33.523 "io_qpairs": 0, 00:17:33.523 "current_admin_qpairs": 0, 00:17:33.523 "current_io_qpairs": 0, 00:17:33.523 "pending_bdev_io": 0, 00:17:33.523 "completed_nvme_io": 0, 00:17:33.523 "transports": [] 00:17:33.523 } 00:17:33.523 ] 00:17:33.523 }' 00:17:33.523 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:33.523 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:33.523 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:33.523 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:33.523 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:33.523 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:33.523 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:33.523 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:33.523 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.523 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.523 [2024-12-13 10:19:27.372046] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.523 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.523 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:33.523 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.523 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.782 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:33.783 "tick_rate": 2100000000, 00:17:33.783 "poll_groups": [ 00:17:33.783 { 00:17:33.783 "name": "nvmf_tgt_poll_group_000", 00:17:33.783 "admin_qpairs": 0, 00:17:33.783 "io_qpairs": 0, 00:17:33.783 "current_admin_qpairs": 0, 00:17:33.783 "current_io_qpairs": 0, 00:17:33.783 "pending_bdev_io": 0, 00:17:33.783 "completed_nvme_io": 0, 00:17:33.783 "transports": [ 00:17:33.783 { 00:17:33.783 "trtype": "TCP" 00:17:33.783 } 00:17:33.783 ] 00:17:33.783 }, 00:17:33.783 { 00:17:33.783 "name": "nvmf_tgt_poll_group_001", 00:17:33.783 "admin_qpairs": 0, 00:17:33.783 "io_qpairs": 0, 00:17:33.783 "current_admin_qpairs": 0, 00:17:33.783 "current_io_qpairs": 0, 00:17:33.783 "pending_bdev_io": 0, 00:17:33.783 "completed_nvme_io": 0, 00:17:33.783 "transports": [ 00:17:33.783 { 00:17:33.783 "trtype": "TCP" 00:17:33.783 } 00:17:33.783 ] 00:17:33.783 }, 00:17:33.783 { 00:17:33.783 "name": "nvmf_tgt_poll_group_002", 00:17:33.783 "admin_qpairs": 0, 00:17:33.783 "io_qpairs": 0, 00:17:33.783 "current_admin_qpairs": 0, 00:17:33.783 "current_io_qpairs": 0, 00:17:33.783 "pending_bdev_io": 0, 00:17:33.783 "completed_nvme_io": 0, 00:17:33.783 "transports": [ 00:17:33.783 { 00:17:33.783 "trtype": "TCP" 
00:17:33.783 } 00:17:33.783 ] 00:17:33.783 }, 00:17:33.783 { 00:17:33.783 "name": "nvmf_tgt_poll_group_003", 00:17:33.783 "admin_qpairs": 0, 00:17:33.783 "io_qpairs": 0, 00:17:33.783 "current_admin_qpairs": 0, 00:17:33.783 "current_io_qpairs": 0, 00:17:33.783 "pending_bdev_io": 0, 00:17:33.783 "completed_nvme_io": 0, 00:17:33.783 "transports": [ 00:17:33.783 { 00:17:33.783 "trtype": "TCP" 00:17:33.783 } 00:17:33.783 ] 00:17:33.783 } 00:17:33.783 ] 00:17:33.783 }' 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.783 Malloc1 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:33.783 [2024-12-13 10:19:27.613044] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:17:33.783 [2024-12-13 10:19:27.642505] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:17:33.783 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:33.783 could not add new controller: failed to write to nvme-fabrics device 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:33.783 10:19:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.783 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:34.043 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.043 10:19:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:34.981 10:19:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:34.981 10:19:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:34.981 10:19:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:34.981 10:19:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:34.981 10:19:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:37.516 10:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:37.516 10:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:37.516 10:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:37.516 10:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:37.516 10:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:37.516 10:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:37.516 10:19:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:37.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:37.516 [2024-12-13 10:19:31.154929] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:17:37.516 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:37.516 could not add new controller: failed to write to nvme-fabrics device 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.516 
10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.516 10:19:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:38.895 10:19:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:38.895 10:19:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:38.895 10:19:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:38.895 10:19:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:38.895 10:19:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:40.801 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:40.801 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:40.801 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:40.801 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:40.801 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:40.801 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:40.801 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:40.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.801 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:40.801 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:40.801 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:40.801 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:40.801 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:40.801 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:40.801 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:40.801 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:40.801 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.801 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.801 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.801 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:40.801 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:40.801 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:40.801 
10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.801 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:41.060 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.060 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:41.060 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.060 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:41.060 [2024-12-13 10:19:34.707257] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.060 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.060 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:41.060 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.060 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:41.060 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.060 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:41.060 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.060 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:41.060 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.060 10:19:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:42.438 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:42.438 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:42.438 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:42.438 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:42.438 10:19:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:44.343 10:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:44.343 10:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:44.343 10:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:44.343 10:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:44.343 10:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:44.343 10:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:44.343 10:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:44.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:44.343 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:44.343 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:44.343 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:44.343 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:44.343 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:44.343 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:44.343 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:44.343 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:44.343 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.343 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.343 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.343 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:44.343 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.343 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.602 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.602 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:44.602 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:44.602 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.602 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.602 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.602 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:44.602 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.602 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.602 [2024-12-13 10:19:38.250854] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:44.602 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.602 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:44.602 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.602 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.603 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.603 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:44.603 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.603 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.603 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.603 10:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:45.540 10:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:45.540 10:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:45.540 10:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:45.540 10:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:45.540 10:19:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:48.075 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:48.075 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:48.075 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:48.075 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:48.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:48.076 [2024-12-13 10:19:41.745640] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.076 10:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:49.014 10:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:49.014 10:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:49.014 10:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:49.014 10:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:49.014 10:19:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:51.549 
10:19:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:51.549 10:19:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:51.549 10:19:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:51.549 10:19:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:51.549 10:19:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:51.549 10:19:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:51.549 10:19:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:51.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.549 [2024-12-13 10:19:45.226123] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.549 10:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:52.927 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:52.927 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:52.927 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:52.927 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:52.927 10:19:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:54.834 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:54.834 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:54.834 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:54.834 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:54.834 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:54.834 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:54.834 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:54.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:54.834 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:54.834 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:54.834 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:54.834 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:17:54.834 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:54.834 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:54.834 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:55.093 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:55.093 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.093 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.093 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.093 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:55.093 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.093 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.093 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.093 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:55.093 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:55.093 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.093 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.093 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.093 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:55.093 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.093 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.093 [2024-12-13 10:19:48.755440] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:55.093 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.094 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:55.094 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.094 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.094 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.094 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:55.094 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.094 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.094 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.094 10:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:56.029 10:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:56.029 10:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:56.029 10:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:56.029 10:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:56.029 10:19:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:58.565 10:19:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:58.565 10:19:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:58.565 10:19:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:58.565 10:19:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:58.565 10:19:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:58.565 10:19:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:58.565 10:19:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:58.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.565 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:58.565 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:58.565 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:58.565 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:58.565 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:58.565 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:58.565 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:58.565 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:58.565 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.565 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.565 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.565 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.565 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.565 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.565 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.565 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:58.565 
10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:58.565 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:58.565 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.565 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.565 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.565 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.565 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 [2024-12-13 10:19:52.246432] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 [2024-12-13 10:19:52.294560] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 
10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 [2024-12-13 10:19:52.342709] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 [2024-12-13 10:19:52.390883] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.566 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:58.567 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:58.567 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.567 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.567 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.567 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.567 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.567 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.567 [2024-12-13 10:19:52.439054] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.567 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.567 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:58.567 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.567 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.567 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.567 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:58.567 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.567 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:58.827 "tick_rate": 2100000000, 00:17:58.827 "poll_groups": [ 00:17:58.827 { 00:17:58.827 "name": "nvmf_tgt_poll_group_000", 00:17:58.827 "admin_qpairs": 2, 00:17:58.827 "io_qpairs": 168, 00:17:58.827 "current_admin_qpairs": 0, 00:17:58.827 "current_io_qpairs": 0, 00:17:58.827 "pending_bdev_io": 0, 00:17:58.827 "completed_nvme_io": 235, 00:17:58.827 "transports": [ 00:17:58.827 { 00:17:58.827 "trtype": "TCP" 00:17:58.827 } 00:17:58.827 ] 00:17:58.827 }, 00:17:58.827 { 00:17:58.827 "name": "nvmf_tgt_poll_group_001", 00:17:58.827 "admin_qpairs": 2, 00:17:58.827 "io_qpairs": 168, 00:17:58.827 "current_admin_qpairs": 0, 00:17:58.827 "current_io_qpairs": 0, 00:17:58.827 "pending_bdev_io": 0, 00:17:58.827 "completed_nvme_io": 270, 00:17:58.827 "transports": [ 00:17:58.827 { 00:17:58.827 "trtype": "TCP" 00:17:58.827 } 00:17:58.827 ] 00:17:58.827 }, 00:17:58.827 { 00:17:58.827 "name": "nvmf_tgt_poll_group_002", 00:17:58.827 "admin_qpairs": 1, 00:17:58.827 "io_qpairs": 168, 00:17:58.827 "current_admin_qpairs": 0, 00:17:58.827 "current_io_qpairs": 0, 00:17:58.827 "pending_bdev_io": 0, 00:17:58.827 "completed_nvme_io": 251, 00:17:58.827 "transports": [ 00:17:58.827 { 00:17:58.827 "trtype": "TCP" 00:17:58.827 } 00:17:58.827 ] 00:17:58.827 }, 00:17:58.827 { 00:17:58.827 "name": "nvmf_tgt_poll_group_003", 00:17:58.827 "admin_qpairs": 2, 00:17:58.827 "io_qpairs": 168, 00:17:58.827 "current_admin_qpairs": 0, 00:17:58.827 "current_io_qpairs": 0, 00:17:58.827 "pending_bdev_io": 0, 00:17:58.827 "completed_nvme_io": 266, 00:17:58.827 "transports": [ 00:17:58.827 { 00:17:58.827 "trtype": "TCP" 00:17:58.827 } 00:17:58.827 ] 00:17:58.827 } 00:17:58.827 ] 00:17:58.827 }' 00:17:58.827 10:19:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:58.827 rmmod nvme_tcp 00:17:58.827 rmmod nvme_fabrics 00:17:58.827 rmmod nvme_keyring 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3880139 ']' 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3880139 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3880139 ']' 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3880139 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3880139 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3880139' 00:17:58.827 killing process with pid 3880139 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3880139 00:17:58.827 10:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3880139 00:18:00.206 10:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:00.206 10:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:00.207 10:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:00.207 10:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:18:00.207 10:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:00.207 10:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:18:00.207 10:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:18:00.207 10:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:00.207 10:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:00.207 10:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.207 10:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.207 10:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:02.743 00:18:02.743 real 0m35.477s 00:18:02.743 user 1m49.454s 00:18:02.743 sys 0m6.263s 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.743 ************************************ 00:18:02.743 END TEST nvmf_rpc 00:18:02.743 ************************************ 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:02.743 ************************************ 00:18:02.743 START TEST nvmf_invalid 00:18:02.743 ************************************ 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:02.743 * Looking for test storage... 
00:18:02.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.743 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:02.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.744 --rc genhtml_branch_coverage=1 00:18:02.744 --rc genhtml_function_coverage=1 00:18:02.744 --rc genhtml_legend=1 00:18:02.744 --rc geninfo_all_blocks=1 00:18:02.744 --rc geninfo_unexecuted_blocks=1 00:18:02.744 00:18:02.744 ' 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:02.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.744 --rc genhtml_branch_coverage=1 00:18:02.744 --rc genhtml_function_coverage=1 00:18:02.744 --rc genhtml_legend=1 00:18:02.744 --rc geninfo_all_blocks=1 00:18:02.744 --rc geninfo_unexecuted_blocks=1 00:18:02.744 00:18:02.744 ' 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:02.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.744 --rc genhtml_branch_coverage=1 00:18:02.744 --rc genhtml_function_coverage=1 00:18:02.744 --rc genhtml_legend=1 00:18:02.744 --rc geninfo_all_blocks=1 00:18:02.744 --rc geninfo_unexecuted_blocks=1 00:18:02.744 00:18:02.744 ' 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:02.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.744 --rc genhtml_branch_coverage=1 00:18:02.744 --rc genhtml_function_coverage=1 00:18:02.744 --rc genhtml_legend=1 00:18:02.744 --rc geninfo_all_blocks=1 00:18:02.744 --rc geninfo_unexecuted_blocks=1 00:18:02.744 00:18:02.744 ' 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:18:02.744 10:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:02.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:18:02.744 10:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:08.019 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:08.019 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:08.019 Found net devices under 0000:af:00.0: cvl_0_0 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:08.019 Found net devices under 0000:af:00.1: cvl_0_1 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:08.019 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:08.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:08.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:18:08.020 00:18:08.020 --- 10.0.0.2 ping statistics --- 00:18:08.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.020 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:08.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:08.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:18:08.020 00:18:08.020 --- 10.0.0.1 ping statistics --- 00:18:08.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.020 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3888015 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3888015 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3888015 ']' 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.020 10:20:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:08.020 [2024-12-13 10:20:01.846582] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:18:08.020 [2024-12-13 10:20:01.846667] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.279 [2024-12-13 10:20:01.965404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:08.279 [2024-12-13 10:20:02.072617] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.279 [2024-12-13 10:20:02.072669] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.279 [2024-12-13 10:20:02.072680] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.279 [2024-12-13 10:20:02.072691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.279 [2024-12-13 10:20:02.072699] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:08.279 [2024-12-13 10:20:02.075088] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.279 [2024-12-13 10:20:02.075165] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.279 [2024-12-13 10:20:02.075267] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.279 [2024-12-13 10:20:02.075277] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:08.847 10:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.847 10:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:18:08.847 10:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:08.847 10:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:08.847 10:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:08.847 10:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.847 10:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:08.847 10:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode9944 00:18:09.106 [2024-12-13 10:20:02.856252] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:09.106 10:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:18:09.106 { 00:18:09.106 "nqn": "nqn.2016-06.io.spdk:cnode9944", 00:18:09.106 "tgt_name": "foobar", 00:18:09.106 "method": "nvmf_create_subsystem", 00:18:09.106 "req_id": 1 00:18:09.106 } 00:18:09.106 Got JSON-RPC error response 00:18:09.106 response: 00:18:09.106 { 00:18:09.106 "code": -32603, 00:18:09.106 "message": "Unable to find target foobar" 00:18:09.106 }' 00:18:09.106 10:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:18:09.106 { 00:18:09.106 "nqn": "nqn.2016-06.io.spdk:cnode9944", 00:18:09.106 "tgt_name": "foobar", 00:18:09.106 "method": "nvmf_create_subsystem", 00:18:09.106 "req_id": 1 00:18:09.106 } 00:18:09.106 Got JSON-RPC error response 00:18:09.106 
response: 00:18:09.106 { 00:18:09.106 "code": -32603, 00:18:09.106 "message": "Unable to find target foobar" 00:18:09.106 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:09.106 10:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:09.106 10:20:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode12612 00:18:09.365 [2024-12-13 10:20:03.056928] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12612: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:09.365 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:18:09.365 { 00:18:09.365 "nqn": "nqn.2016-06.io.spdk:cnode12612", 00:18:09.365 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:09.365 "method": "nvmf_create_subsystem", 00:18:09.365 "req_id": 1 00:18:09.365 } 00:18:09.365 Got JSON-RPC error response 00:18:09.365 response: 00:18:09.365 { 00:18:09.365 "code": -32602, 00:18:09.365 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:09.365 }' 00:18:09.365 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:18:09.365 { 00:18:09.365 "nqn": "nqn.2016-06.io.spdk:cnode12612", 00:18:09.365 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:09.365 "method": "nvmf_create_subsystem", 00:18:09.365 "req_id": 1 00:18:09.365 } 00:18:09.365 Got JSON-RPC error response 00:18:09.365 response: 00:18:09.365 { 00:18:09.365 "code": -32602, 00:18:09.365 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:09.365 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:09.365 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:09.365 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31434 00:18:09.365 [2024-12-13 10:20:03.245565] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31434: invalid model number 'SPDK_Controller' 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:18:09.625 { 00:18:09.625 "nqn": "nqn.2016-06.io.spdk:cnode31434", 00:18:09.625 "model_number": "SPDK_Controller\u001f", 00:18:09.625 "method": "nvmf_create_subsystem", 00:18:09.625 "req_id": 1 00:18:09.625 } 00:18:09.625 Got JSON-RPC error response 00:18:09.625 response: 00:18:09.625 { 00:18:09.625 "code": -32602, 00:18:09.625 "message": "Invalid MN SPDK_Controller\u001f" 00:18:09.625 }' 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:18:09.625 { 00:18:09.625 "nqn": "nqn.2016-06.io.spdk:cnode31434", 00:18:09.625 "model_number": "SPDK_Controller\u001f", 00:18:09.625 "method": "nvmf_create_subsystem", 00:18:09.625 "req_id": 1 00:18:09.625 } 00:18:09.625 Got JSON-RPC error response 00:18:09.625 response: 00:18:09.625 { 00:18:09.625 "code": -32602, 00:18:09.625 "message": "Invalid MN SPDK_Controller\u001f" 00:18:09.625 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:18:09.625 10:20:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.625 10:20:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:18:09.625 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:18:09.626 
10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 
00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ - == \- ]] 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@29 -- # string='\-B<[WZr(d(o"pK]*IpK=' 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '\-B<[WZr(d(o"pK]*IpK=' 00:18:09.626 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '\-B<[WZr(d(o"pK]*IpK=' nqn.2016-06.io.spdk:cnode32123 00:18:09.885 [2024-12-13 10:20:03.606812] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32123: invalid serial number '\-B<[WZr(d(o"pK]*IpK=' 00:18:09.885 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:18:09.885 { 00:18:09.885 "nqn": "nqn.2016-06.io.spdk:cnode32123", 00:18:09.885 "serial_number": "\\-B<[WZr(d(o\"pK]*IpK\u007f=", 00:18:09.885 "method": "nvmf_create_subsystem", 00:18:09.885 "req_id": 1 00:18:09.885 } 00:18:09.885 Got JSON-RPC error response 00:18:09.885 response: 00:18:09.885 { 00:18:09.885 "code": -32602, 00:18:09.885 "message": "Invalid SN \\-B<[WZr(d(o\"pK]*IpK\u007f=" 00:18:09.885 }' 00:18:09.885 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:18:09.885 { 00:18:09.885 "nqn": "nqn.2016-06.io.spdk:cnode32123", 00:18:09.885 "serial_number": "\\-B<[WZr(d(o\"pK]*IpK\u007f=", 00:18:09.885 "method": "nvmf_create_subsystem", 00:18:09.885 "req_id": 1 00:18:09.885 } 00:18:09.885 Got JSON-RPC error response 00:18:09.885 response: 00:18:09.885 { 00:18:09.885 "code": -32602, 00:18:09.886 "message": "Invalid SN \\-B<[WZr(d(o\"pK]*IpK\u007f=" 00:18:09.886 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' 
'42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.886 10:20:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:09.886 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x70' 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:18:10.146 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 97 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ > == \- ]] 00:18:10.147 10:20:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '>###### /dev/null' 00:18:13.287 10:20:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:15.823 00:18:15.823 real 0m12.995s 00:18:15.823 user 0m23.561s 00:18:15.823 sys 0m5.085s 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:15.823 ************************************ 00:18:15.823 END TEST nvmf_invalid 00:18:15.823 ************************************ 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:15.823 ************************************ 00:18:15.823 START TEST nvmf_connect_stress 00:18:15.823 ************************************ 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:15.823 * Looking for test storage... 
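The trace above is the core of the nvmf_invalid negative test: invalid.sh assembles a serial number out of random printable ASCII (codes 32 through 127, one character per printf %x / echo -e pair), hands it to nvmf_create_subsystem, and passes only if the target answers with an "Invalid SN" JSON-RPC error. A minimal standalone sketch of that flow, assuming bash's builtin echo -e handles the \xNN escapes and that rpc.py reports the JSON-RPC error text on failure; gen_random_s here is a hypothetical condensed version of the helper being traced, and the rpc.py path and cnode32123 NQN are the ones this log shows:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    gen_random_s() {
        # hypothetical condensed version of the helper traced above
        local length=$1 ll string= c
        local chars=($(seq 32 127))
        for (( ll = 0; ll < length; ll++ )); do
            c=${chars[RANDOM % ${#chars[@]}]}
            string+=$(echo -e "\x$(printf %x "$c")")   # same printf %x / echo -e pairing as the trace
        done
        printf '%s\n' "$string"
    }
    serial=$(gen_random_s 21)
    out=$("$rpc" nvmf_create_subsystem -s "$serial" nqn.2016-06.io.spdk:cnode32123 2>&1) || true
    [[ $out == *"Invalid SN"* ]] && echo 'serial number rejected as expected'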
00:18:15.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:18:15.823 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:15.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.824 --rc genhtml_branch_coverage=1 00:18:15.824 --rc genhtml_function_coverage=1 00:18:15.824 --rc genhtml_legend=1 00:18:15.824 --rc geninfo_all_blocks=1 00:18:15.824 --rc geninfo_unexecuted_blocks=1 00:18:15.824 00:18:15.824 ' 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:15.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.824 --rc genhtml_branch_coverage=1 00:18:15.824 --rc genhtml_function_coverage=1 00:18:15.824 --rc genhtml_legend=1 00:18:15.824 --rc geninfo_all_blocks=1 00:18:15.824 --rc geninfo_unexecuted_blocks=1 00:18:15.824 00:18:15.824 ' 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:15.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.824 --rc genhtml_branch_coverage=1 00:18:15.824 --rc genhtml_function_coverage=1 00:18:15.824 --rc genhtml_legend=1 00:18:15.824 --rc geninfo_all_blocks=1 00:18:15.824 --rc geninfo_unexecuted_blocks=1 00:18:15.824 00:18:15.824 ' 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:15.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.824 --rc genhtml_branch_coverage=1 00:18:15.824 --rc genhtml_function_coverage=1 00:18:15.824 --rc genhtml_legend=1 00:18:15.824 --rc geninfo_all_blocks=1 00:18:15.824 --rc geninfo_unexecuted_blocks=1 00:18:15.824 00:18:15.824 ' 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:15.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:18:15.824 10:20:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:18:21.097 10:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:21.097 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:21.098 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:21.098 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:21.098 Found net devices under 0000:af:00.0: cvl_0_0 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:21.098 Found net devices under 0000:af:00.1: cvl_0_1 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:21.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:21.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:18:21.098 00:18:21.098 --- 10.0.0.2 ping statistics --- 00:18:21.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:21.098 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:21.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:21.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:18:21.098 00:18:21.098 --- 10.0.0.1 ping statistics --- 00:18:21.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:21.098 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:21.098 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:21.357 10:20:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:21.357 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:21.357 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:21.357 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:21.357 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:21.357 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3892542 00:18:21.357 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3892542 00:18:21.357 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:21.357 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3892542 ']' 00:18:21.357 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.357 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.357 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:21.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.357 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.357 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:21.357 [2024-12-13 10:20:15.123842] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:21.357 [2024-12-13 10:20:15.123934] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.357 [2024-12-13 10:20:15.241229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:21.616 [2024-12-13 10:20:15.340214] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.616 [2024-12-13 10:20:15.340259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.616 [2024-12-13 10:20:15.340271] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.616 [2024-12-13 10:20:15.340282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.616 [2024-12-13 10:20:15.340290] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:21.616 [2024-12-13 10:20:15.342619] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.616 [2024-12-13 10:20:15.342680] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.616 [2024-12-13 10:20:15.342702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:22.183 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:22.183 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:18:22.183 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:22.183 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:22.183 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:22.183 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.183 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:22.183 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.183 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:22.183 [2024-12-13 10:20:15.972103] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.183 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.183 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:22.183 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
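Everything from nvmftestinit down to the ping checks is just wiring the two E810 ports (cvl_0_0 and cvl_0_1) back-to-back through a network namespace, so the target and the initiator share one host but talk over real NICs. Condensed from the commands in this trace (device and namespace names are the ones the log reports; the iptables comment tag is dropped for brevity):

    NS=cvl_0_0_ns_spdk
    ip netns add $NS
    ip link set cvl_0_0 netns $NS                     # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root namespace
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # root namespace -> target namespace
    ip netns exec $NS ping -c 1 10.0.0.1              # target namespace -> root namespace

nvmf_tgt is then started inside that namespace (the ip netns exec cvl_0_0_ns_spdk prefix on the nvmf_tgt command above), so the TCP listener created a few lines further on binds to 10.0.0.2:4420 while the initiator-side tools connect from the root namespace.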
00:18:22.183 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:22.183 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.183 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:22.183 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.183 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:22.183 [2024-12-13 10:20:15.994073] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.183 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.184 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:22.184 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.184 10:20:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:22.184 NULL1 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3892712 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:22.184 10:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:22.184 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:22.442 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:22.442 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:22.442 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:22.442 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:22.442 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:22.442 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:22.442 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:22.443 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:22.443 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:22.443 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:22.443 10:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:22.443 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:22.443 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.443 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:22.701 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.701 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:22.701 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:22.701 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.701 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:22.960 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.960 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:22.960 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:22.960 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.960 10:20:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:23.218 10:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.218 10:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:23.218 10:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:23.218 10:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.218 10:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:23.785 10:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.785 10:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:23.785 10:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:23.785 10:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.785 10:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:24.044 10:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.044 10:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:24.044 10:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:24.044 10:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.044 10:20:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:24.303 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.303 10:20:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:24.303 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:24.303 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.303 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:24.562 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.562 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:24.562 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:24.562 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.562 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:25.130 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.130 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:25.130 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:25.130 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.130 10:20:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:25.389 10:20:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.389 10:20:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:25.389 10:20:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:25.389 10:20:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.389 10:20:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:25.648 10:20:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.648 10:20:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:25.648 10:20:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:25.648 10:20:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.648 10:20:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:25.907 10:20:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.907 10:20:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:25.907 10:20:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:25.907 10:20:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.907 10:20:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:26.166 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.166 10:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:26.166 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:26.166 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.166 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:26.735 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.735 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:26.735 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:26.735 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.735 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:26.994 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.994 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:26.994 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:26.994 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.994 10:20:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:27.253 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.253 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:27.253 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:27.254 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.254 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:27.513 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.513 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:27.513 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:27.513 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.513 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:28.081 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.081 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:28.081 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:28.081 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.081 10:20:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:28.340 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.340 10:20:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:28.340 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:28.340 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.340 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:28.599 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.599 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:28.599 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:28.599 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.599 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:28.858 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.858 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:28.858 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:28.858 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.858 10:20:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:29.117 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.117 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:29.117 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:29.117 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.117 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:29.685 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.685 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:29.685 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:29.685 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.685 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:29.944 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.944 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:29.944 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:29.944 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.944 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:30.203 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.203 10:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:30.203 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:30.203 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.203 10:20:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:30.462 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.462 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:30.462 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:30.462 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.462 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.030 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.030 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:31.030 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:31.030 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.030 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.289 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.289 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:31.289 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:31.289 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.289 10:20:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.548 10:20:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.548 10:20:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:31.548 10:20:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:31.548 10:20:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.548 10:20:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.806 10:20:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.806 10:20:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:31.806 10:20:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:31.806 10:20:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.806 10:20:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.135 10:20:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.135 10:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:32.135 10:20:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:32.135 10:20:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.135 10:20:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.394 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3892712 00:18:32.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3892712) - No such process 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3892712 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:32.654 rmmod nvme_tcp 00:18:32.654 rmmod nvme_fabrics 00:18:32.654 rmmod nvme_keyring 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3892542 ']' 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3892542 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3892542 ']' 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3892542 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3892542 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
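The long run of repeated "kill -0 3892712" / "rpc_cmd" entries above is the script's monitor loop: while the connect_stress load generator (PID 3892712) is still alive, the harness keeps replaying RPCs against the target; once kill -0 fails with "No such process" the loop ends, the PID is reaped with wait, and nvmftestfini tears everything down (nvme module removal followed by killing the nvmf_tgt, PID 3892542). A minimal sketch of that polling pattern, not the literal connect_stress.sh code:

  # Minimal sketch of the liveness-poll pattern seen above (illustrative only;
  # the binary path and its arguments are taken from the log, the loop body is simplified).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
      -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
  PERF_PID=$!
  while kill -0 "$PERF_PID"; do    # succeeds while the stress process still exists
      rpc_cmd < rpc.txt            # rpc_cmd is the harness helper; keep exercising the RPC path under load
  done
  wait "$PERF_PID"                 # reap the load generator once it has exited
  rm -f rpc.txt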
00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3892542' 00:18:32.654 killing process with pid 3892542 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3892542 00:18:32.654 10:20:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3892542 00:18:34.032 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:34.032 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:34.032 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:34.032 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:18:34.032 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:18:34.032 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:18:34.032 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:34.032 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:34.032 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:34.032 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.032 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.032 10:20:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.937 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:35.937 00:18:35.937 real 0m20.417s 00:18:35.937 user 0m43.940s 00:18:35.937 sys 0m8.128s 00:18:35.937 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:35.937 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:35.938 ************************************ 00:18:35.938 END TEST nvmf_connect_stress 00:18:35.938 ************************************ 00:18:35.938 10:20:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:35.938 10:20:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:35.938 10:20:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:35.938 10:20:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:35.938 ************************************ 00:18:35.938 START TEST nvmf_fused_ordering 00:18:35.938 ************************************ 00:18:35.938 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:35.938 * Looking for test storage... 
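The timing summary and the END TEST / START TEST banners above come from the harness's run_test wrapper: each sub-test script is launched through it, timed, and bracketed with banners, which is why a fresh "Looking for test storage..." preamble begins immediately for nvmf_fused_ordering. A sketch of that wrapper pattern (an assumed shape for illustration, not SPDK's actual run_test implementation):

  # Sketch of the START/END TEST bracketing visible in this log (assumed shape;
  # the banner text and the real/user/sys output mirror the log above).
  run_test() {
      local name=$1 rc; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"; rc=$?             # produces the real/user/sys lines seen above
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
  # Invoked above as:
  #   run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp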
00:18:35.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:35.938 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:35.938 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:18:35.938 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:36.197 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:36.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.198 --rc genhtml_branch_coverage=1 00:18:36.198 --rc genhtml_function_coverage=1 00:18:36.198 --rc genhtml_legend=1 00:18:36.198 --rc geninfo_all_blocks=1 00:18:36.198 --rc geninfo_unexecuted_blocks=1 00:18:36.198 00:18:36.198 ' 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:36.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.198 --rc genhtml_branch_coverage=1 00:18:36.198 --rc genhtml_function_coverage=1 00:18:36.198 --rc genhtml_legend=1 00:18:36.198 --rc geninfo_all_blocks=1 00:18:36.198 --rc geninfo_unexecuted_blocks=1 00:18:36.198 00:18:36.198 ' 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:36.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.198 --rc genhtml_branch_coverage=1 00:18:36.198 --rc genhtml_function_coverage=1 00:18:36.198 --rc genhtml_legend=1 00:18:36.198 --rc geninfo_all_blocks=1 00:18:36.198 --rc geninfo_unexecuted_blocks=1 00:18:36.198 00:18:36.198 ' 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:36.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.198 --rc genhtml_branch_coverage=1 00:18:36.198 --rc genhtml_function_coverage=1 00:18:36.198 --rc genhtml_legend=1 00:18:36.198 --rc geninfo_all_blocks=1 00:18:36.198 --rc geninfo_unexecuted_blocks=1 00:18:36.198 00:18:36.198 ' 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:36.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:36.198 10:20:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:41.470 10:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:41.470 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:41.470 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:41.470 Found net devices under 0000:af:00.0: cvl_0_0 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:41.470 Found net devices under 0000:af:00.1: cvl_0_1 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:41.470 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:41.470 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:18:41.470 00:18:41.470 --- 10.0.0.2 ping statistics --- 00:18:41.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.470 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:41.470 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:41.470 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:18:41.470 00:18:41.470 --- 10.0.0.1 ping statistics --- 00:18:41.470 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.470 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3897994 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3897994 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3897994 ']' 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.470 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.471 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:41.471 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:41.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.471 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.471 10:20:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:41.729 [2024-12-13 10:20:35.394818] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:41.729 [2024-12-13 10:20:35.394914] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.729 [2024-12-13 10:20:35.511509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.729 [2024-12-13 10:20:35.613665] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.729 [2024-12-13 10:20:35.613709] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.729 [2024-12-13 10:20:35.613718] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.729 [2024-12-13 10:20:35.613728] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.729 [2024-12-13 10:20:35.613735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:41.729 [2024-12-13 10:20:35.615038] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:42.666 [2024-12-13 10:20:36.231498] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:42.666 [2024-12-13 10:20:36.247646] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:42.666 NULL1 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.666 10:20:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:42.666 [2024-12-13 10:20:36.318798] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
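Taken together, the setup steps logged above amount to the following sequence: one port of the E810 pair (cvl_0_0) is moved into a dedicated network namespace to act as the NVMe-oF target while its peer (cvl_0_1) stays in the root namespace as the initiator, the target application is started inside that namespace, and the subsystem is assembled over the default RPC socket. A condensed sketch, using only commands and arguments that appear in this run (rpc.py here stands for scripts/rpc.py in the SPDK tree; interface names and addresses are specific to this job, and flags may differ on other SPDK versions):

# target port into its own namespace, initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port in the host firewall and verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# start the target inside the namespace, then build the subsystem over /var/tmp/spdk.sock
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512
rpc.py bdev_wait_for_examine
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

Splitting the ports across namespaces keeps the test traffic on the cabled path between the two E810 ports instead of looping back through the local stack.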
00:18:42.666 [2024-12-13 10:20:36.318855] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898077 ] 00:18:42.925 Attached to nqn.2016-06.io.spdk:cnode1 00:18:42.926 Namespace ID: 1 size: 1GB 00:18:42.926 fused_ordering(0) 00:18:42.926 fused_ordering(1) 00:18:42.926 fused_ordering(2) 00:18:42.926 fused_ordering(3) 00:18:42.926 fused_ordering(4) 00:18:42.926 fused_ordering(5) 00:18:42.926 fused_ordering(6) 00:18:42.926 fused_ordering(7) 00:18:42.926 fused_ordering(8) 00:18:42.926 fused_ordering(9) 00:18:42.926 fused_ordering(10) 00:18:42.926 fused_ordering(11) 00:18:42.926 fused_ordering(12) 00:18:42.926 fused_ordering(13) 00:18:42.926 fused_ordering(14) 00:18:42.926 fused_ordering(15) 00:18:42.926 fused_ordering(16) 00:18:42.926 fused_ordering(17) 00:18:42.926 fused_ordering(18) 00:18:42.926 fused_ordering(19) 00:18:42.926 fused_ordering(20) 00:18:42.926 fused_ordering(21) 00:18:42.926 fused_ordering(22) 00:18:42.926 fused_ordering(23) 00:18:42.926 fused_ordering(24) 00:18:42.926 fused_ordering(25) 00:18:42.926 fused_ordering(26) 00:18:42.926 fused_ordering(27) 00:18:42.926 fused_ordering(28) 00:18:42.926 fused_ordering(29) 00:18:42.926 fused_ordering(30) 00:18:42.926 fused_ordering(31) 00:18:42.926 fused_ordering(32) 00:18:42.926 fused_ordering(33) 00:18:42.926 fused_ordering(34) 00:18:42.926 fused_ordering(35) 00:18:42.926 fused_ordering(36) 00:18:42.926 fused_ordering(37) 00:18:42.926 fused_ordering(38) 00:18:42.926 fused_ordering(39) 00:18:42.926 fused_ordering(40) 00:18:42.926 fused_ordering(41) 00:18:42.926 fused_ordering(42) 00:18:42.926 fused_ordering(43) 00:18:42.926 fused_ordering(44) 00:18:42.926 fused_ordering(45) 00:18:42.926 fused_ordering(46) 00:18:42.926 fused_ordering(47) 00:18:42.926 fused_ordering(48) 00:18:42.926 fused_ordering(49) 00:18:42.926 fused_ordering(50) 00:18:42.926 fused_ordering(51) 00:18:42.926 fused_ordering(52) 00:18:42.926 fused_ordering(53) 00:18:42.926 fused_ordering(54) 00:18:42.926 fused_ordering(55) 00:18:42.926 fused_ordering(56) 00:18:42.926 fused_ordering(57) 00:18:42.926 fused_ordering(58) 00:18:42.926 fused_ordering(59) 00:18:42.926 fused_ordering(60) 00:18:42.926 fused_ordering(61) 00:18:42.926 fused_ordering(62) 00:18:42.926 fused_ordering(63) 00:18:42.926 fused_ordering(64) 00:18:42.926 fused_ordering(65) 00:18:42.926 fused_ordering(66) 00:18:42.926 fused_ordering(67) 00:18:42.926 fused_ordering(68) 00:18:42.926 fused_ordering(69) 00:18:42.926 fused_ordering(70) 00:18:42.926 fused_ordering(71) 00:18:42.926 fused_ordering(72) 00:18:42.926 fused_ordering(73) 00:18:42.926 fused_ordering(74) 00:18:42.926 fused_ordering(75) 00:18:42.926 fused_ordering(76) 00:18:42.926 fused_ordering(77) 00:18:42.926 fused_ordering(78) 00:18:42.926 fused_ordering(79) 00:18:42.926 fused_ordering(80) 00:18:42.926 fused_ordering(81) 00:18:42.926 fused_ordering(82) 00:18:42.926 fused_ordering(83) 00:18:42.926 fused_ordering(84) 00:18:42.926 fused_ordering(85) 00:18:42.926 fused_ordering(86) 00:18:42.926 fused_ordering(87) 00:18:42.926 fused_ordering(88) 00:18:42.926 fused_ordering(89) 00:18:42.926 fused_ordering(90) 00:18:42.926 fused_ordering(91) 00:18:42.926 fused_ordering(92) 00:18:42.926 fused_ordering(93) 00:18:42.926 fused_ordering(94) 00:18:42.926 fused_ordering(95) 00:18:42.926 fused_ordering(96) 00:18:42.926 fused_ordering(97) 00:18:42.926 fused_ordering(98) 
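The attach reported above is driven entirely by the SPDK transport ID string handed to the fused_ordering binary; for comparison, a kernel initiator in the root namespace would presumably reach the same listener with nvme-cli along these lines (nvme-tcp was modprobed earlier in the run; this connect is not part of the test itself):

# transport ID used by the SPDK test tool in this run
#   trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
# rough nvme-cli equivalent
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1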
00:18:42.926 fused_ordering(99) [... fused_ordering(100) through fused_ordering(957) logged in unbroken ascending sequence between 00:18:42.926 and 00:18:44.584; repetitive entries condensed ...] 00:18:44.584 fused_ordering(958)
00:18:44.584 fused_ordering(959) 00:18:44.584 fused_ordering(960) 00:18:44.584 fused_ordering(961) 00:18:44.584 fused_ordering(962) 00:18:44.584 fused_ordering(963) 00:18:44.584 fused_ordering(964) 00:18:44.584 fused_ordering(965) 00:18:44.584 fused_ordering(966) 00:18:44.584 fused_ordering(967) 00:18:44.584 fused_ordering(968) 00:18:44.584 fused_ordering(969) 00:18:44.584 fused_ordering(970) 00:18:44.584 fused_ordering(971) 00:18:44.584 fused_ordering(972) 00:18:44.584 fused_ordering(973) 00:18:44.584 fused_ordering(974) 00:18:44.584 fused_ordering(975) 00:18:44.584 fused_ordering(976) 00:18:44.584 fused_ordering(977) 00:18:44.584 fused_ordering(978) 00:18:44.584 fused_ordering(979) 00:18:44.584 fused_ordering(980) 00:18:44.584 fused_ordering(981) 00:18:44.584 fused_ordering(982) 00:18:44.584 fused_ordering(983) 00:18:44.584 fused_ordering(984) 00:18:44.584 fused_ordering(985) 00:18:44.584 fused_ordering(986) 00:18:44.584 fused_ordering(987) 00:18:44.584 fused_ordering(988) 00:18:44.584 fused_ordering(989) 00:18:44.584 fused_ordering(990) 00:18:44.584 fused_ordering(991) 00:18:44.584 fused_ordering(992) 00:18:44.584 fused_ordering(993) 00:18:44.584 fused_ordering(994) 00:18:44.584 fused_ordering(995) 00:18:44.584 fused_ordering(996) 00:18:44.585 fused_ordering(997) 00:18:44.585 fused_ordering(998) 00:18:44.585 fused_ordering(999) 00:18:44.585 fused_ordering(1000) 00:18:44.585 fused_ordering(1001) 00:18:44.585 fused_ordering(1002) 00:18:44.585 fused_ordering(1003) 00:18:44.585 fused_ordering(1004) 00:18:44.585 fused_ordering(1005) 00:18:44.585 fused_ordering(1006) 00:18:44.585 fused_ordering(1007) 00:18:44.585 fused_ordering(1008) 00:18:44.585 fused_ordering(1009) 00:18:44.585 fused_ordering(1010) 00:18:44.585 fused_ordering(1011) 00:18:44.585 fused_ordering(1012) 00:18:44.585 fused_ordering(1013) 00:18:44.585 fused_ordering(1014) 00:18:44.585 fused_ordering(1015) 00:18:44.585 fused_ordering(1016) 00:18:44.585 fused_ordering(1017) 00:18:44.585 fused_ordering(1018) 00:18:44.585 fused_ordering(1019) 00:18:44.585 fused_ordering(1020) 00:18:44.585 fused_ordering(1021) 00:18:44.585 fused_ordering(1022) 00:18:44.585 fused_ordering(1023) 00:18:44.585 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:44.585 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:44.585 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:44.585 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:44.585 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:44.585 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:44.585 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:44.585 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:44.585 rmmod nvme_tcp 00:18:44.585 rmmod nvme_fabrics 00:18:44.585 rmmod nvme_keyring 00:18:44.844 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:44.844 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:44.844 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:44.844 10:20:38 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3897994 ']' 00:18:44.844 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3897994 00:18:44.844 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3897994 ']' 00:18:44.844 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3897994 00:18:44.844 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:18:44.844 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.844 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3897994 00:18:44.844 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:44.844 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:44.844 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3897994' 00:18:44.844 killing process with pid 3897994 00:18:44.844 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3897994 00:18:44.844 10:20:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3897994 00:18:45.782 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:45.782 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:45.782 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:45.782 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:45.782 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:18:45.782 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:45.782 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:18:45.782 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:45.782 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:45.782 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.782 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:45.782 10:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:48.320 00:18:48.320 real 0m11.987s 00:18:48.320 user 0m6.979s 00:18:48.320 sys 0m5.602s 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:48.320 ************************************ 00:18:48.320 END TEST nvmf_fused_ordering 00:18:48.320 
************************************ 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:48.320 ************************************ 00:18:48.320 START TEST nvmf_ns_masking 00:18:48.320 ************************************ 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:48.320 * Looking for test storage... 00:18:48.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:48.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.320 --rc genhtml_branch_coverage=1 00:18:48.320 --rc genhtml_function_coverage=1 00:18:48.320 --rc genhtml_legend=1 00:18:48.320 --rc geninfo_all_blocks=1 00:18:48.320 --rc geninfo_unexecuted_blocks=1 00:18:48.320 00:18:48.320 ' 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:48.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.320 --rc genhtml_branch_coverage=1 00:18:48.320 --rc genhtml_function_coverage=1 00:18:48.320 --rc genhtml_legend=1 00:18:48.320 --rc geninfo_all_blocks=1 00:18:48.320 --rc geninfo_unexecuted_blocks=1 00:18:48.320 00:18:48.320 ' 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:48.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.320 --rc genhtml_branch_coverage=1 00:18:48.320 --rc genhtml_function_coverage=1 00:18:48.320 --rc genhtml_legend=1 00:18:48.320 --rc geninfo_all_blocks=1 00:18:48.320 --rc geninfo_unexecuted_blocks=1 00:18:48.320 00:18:48.320 ' 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:48.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.320 --rc genhtml_branch_coverage=1 00:18:48.320 --rc genhtml_function_coverage=1 00:18:48.320 --rc genhtml_legend=1 00:18:48.320 --rc geninfo_all_blocks=1 00:18:48.320 --rc geninfo_unexecuted_blocks=1 00:18:48.320 00:18:48.320 ' 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.320 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:48.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=8c36ac4f-1148-481c-bb17-620206b8e989 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=4cd65251-4dce-409d-a5a5-8109ccb37b1d 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c3c6a2af-13bc-4129-8b36-30fc292db127 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:48.321 10:20:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:53.594 10:20:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:53.594 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:53.594 10:20:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:53.594 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:53.594 Found net devices under 0000:af:00.0: cvl_0_0 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
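The discovery pass above resolves each supported PCI function to its kernel net device by globbing sysfs, which is what the traced nvmf/common.sh lines do. A minimal standalone sketch of that lookup, assuming the two E810 functions reported above (0000:af:00.0 and 0000:af:00.1):

  # Map a PCI function to the net devices bound under it (sysfs layout as in the trace).
  for pci in 0000:af:00.0 0000:af:00.1; do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$dev" ] || continue   # skip functions with no bound network driver
          echo "Found net device under $pci: ${dev##*/}"
      done
  done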
00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:53.594 Found net devices under 0000:af:00.1: cvl_0_1 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:53.594 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:53.595 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:53.595 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:53.595 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:53.595 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:53.595 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:53.595 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:53.595 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:53.854 10:20:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:53.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:53.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:18:53.854 00:18:53.854 --- 10.0.0.2 ping statistics --- 00:18:53.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.854 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:53.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:53.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:18:53.854 00:18:53.854 --- 10.0.0.1 ping statistics --- 00:18:53.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.854 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3902118 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3902118 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3902118 ']' 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.854 10:20:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.854 10:20:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:54.113 [2024-12-13 10:20:47.751823] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:54.113 [2024-12-13 10:20:47.751918] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.113 [2024-12-13 10:20:47.869561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.113 [2024-12-13 10:20:47.971525] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.113 [2024-12-13 10:20:47.971571] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:54.113 [2024-12-13 10:20:47.971581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:54.113 [2024-12-13 10:20:47.971591] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:54.113 [2024-12-13 10:20:47.971598] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
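The ns_masking run that follows drives the freshly started target entirely through rpc.py. A condensed sketch of the RPC sequence it exercises, assuming the workspace path, subsystem/host NQNs and target address shown in the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Transport, backing bdev, subsystem and TCP listener, as in the traced calls
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Masking proper: attach the namespace without auto-visibility, then grant or revoke it per host NQN
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1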
00:18:54.113 [2024-12-13 10:20:47.973022] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.681 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.681 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:54.681 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:54.681 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:54.681 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:54.940 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.940 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:54.940 [2024-12-13 10:20:48.745159] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.940 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:54.940 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:54.940 10:20:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:55.199 Malloc1 00:18:55.199 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:55.457 Malloc2 00:18:55.457 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:55.716 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:55.975 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:55.975 [2024-12-13 10:20:49.844169] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.975 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:55.975 10:20:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c3c6a2af-13bc-4129-8b36-30fc292db127 -a 10.0.0.2 -s 4420 -i 4 00:18:56.233 10:20:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:56.233 10:20:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:56.233 10:20:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:56.233 10:20:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:56.233 
10:20:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:58.767 [ 0]:0x1 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de50bc3e721f440fa92b78dfd6becbc1 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de50bc3e721f440fa92b78dfd6becbc1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:58.767 [ 0]:0x1 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de50bc3e721f440fa92b78dfd6becbc1 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de50bc3e721f440fa92b78dfd6becbc1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:58.767 10:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:58.767 [ 1]:0x2 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f74eeb35fbf40cabdb95296f373c3cd 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f74eeb35fbf40cabdb95296f373c3cd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:58.767 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:59.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:59.026 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:59.284 10:20:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:59.543 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:59.543 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c3c6a2af-13bc-4129-8b36-30fc292db127 -a 10.0.0.2 -s 4420 -i 4 00:18:59.543 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:59.543 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:59.543 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:59.543 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:18:59.543 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:18:59.543 10:20:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:01.445 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:01.445 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:01.445 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:01.703 [ 0]:0x2 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:01.703 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:01.962 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=0f74eeb35fbf40cabdb95296f373c3cd 00:19:01.962 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f74eeb35fbf40cabdb95296f373c3cd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:01.962 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:01.962 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:19:01.962 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:01.962 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:01.962 [ 0]:0x1 00:19:01.962 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:01.962 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:02.221 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de50bc3e721f440fa92b78dfd6becbc1 00:19:02.221 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de50bc3e721f440fa92b78dfd6becbc1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:02.221 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:19:02.221 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:02.221 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:02.221 [ 1]:0x2 00:19:02.221 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:02.221 10:20:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:02.221 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f74eeb35fbf40cabdb95296f373c3cd 00:19:02.221 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f74eeb35fbf40cabdb95296f373c3cd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:02.221 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.480 10:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:02.480 [ 0]:0x2 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f74eeb35fbf40cabdb95296f373c3cd 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f74eeb35fbf40cabdb95296f373c3cd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:02.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:02.480 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:02.738 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:19:02.738 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c3c6a2af-13bc-4129-8b36-30fc292db127 -a 10.0.0.2 -s 4420 -i 4 00:19:02.997 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:02.997 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:02.997 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.997 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:02.997 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:02.997 10:20:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:05.040 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:05.040 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:05.040 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:05.040 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:05.040 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:05.040 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:05.040 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:05.040 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:05.040 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:05.040 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:05.040 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:19:05.040 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:05.040 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:05.040 [ 0]:0x1 00:19:05.040 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:05.040 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:05.040 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de50bc3e721f440fa92b78dfd6becbc1 00:19:05.040 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de50bc3e721f440fa92b78dfd6becbc1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:05.040 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:19:05.040 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:05.040 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:05.040 [ 1]:0x2 00:19:05.040 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:05.040 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:05.299 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f74eeb35fbf40cabdb95296f373c3cd 00:19:05.299 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f74eeb35fbf40cabdb95296f373c3cd != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:05.299 10:20:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:05.299 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:19:05.299 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:05.299 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:05.299 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:05.299 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.299 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:05.299 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.299 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:05.299 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:05.299 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:05.299 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:05.299 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:05.299 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:05.299 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:05.299 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:05.299 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:05.299 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:05.299 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:05.299 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:19:05.299 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:05.299 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:05.559 [ 0]:0x2 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f74eeb35fbf40cabdb95296f373c3cd 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f74eeb35fbf40cabdb95296f373c3cd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:05.559 10:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:05.559 [2024-12-13 10:20:59.411763] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:19:05.559 request: 00:19:05.559 { 00:19:05.559 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.559 "nsid": 2, 00:19:05.559 "host": "nqn.2016-06.io.spdk:host1", 00:19:05.559 "method": "nvmf_ns_remove_host", 00:19:05.559 "req_id": 1 00:19:05.559 } 00:19:05.559 Got JSON-RPC error response 00:19:05.559 response: 00:19:05.559 { 00:19:05.559 "code": -32602, 00:19:05.559 "message": "Invalid parameters" 00:19:05.559 } 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:05.559 10:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:05.559 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:05.818 [ 0]:0x2 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f74eeb35fbf40cabdb95296f373c3cd 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f74eeb35fbf40cabdb95296f373c3cd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:05.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3904162 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3904162 
/var/tmp/host.sock 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3904162 ']' 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:05.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.818 10:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:05.818 [2024-12-13 10:20:59.674792] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:19:05.818 [2024-12-13 10:20:59.674886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3904162 ] 00:19:06.077 [2024-12-13 10:20:59.787138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.077 [2024-12-13 10:20:59.895983] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.014 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.014 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:19:07.014 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:07.273 10:21:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:07.273 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 8c36ac4f-1148-481c-bb17-620206b8e989 00:19:07.273 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:07.273 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8C36AC4F1148481CBB17620206B8E989 -i 00:19:07.532 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 4cd65251-4dce-409d-a5a5-8109ccb37b1d 00:19:07.532 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:07.532 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 4CD652514DCE409DA5A58109CCB37B1D -i 00:19:07.790 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:08.049 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:19:08.049 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:08.049 10:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:08.616 nvme0n1 00:19:08.616 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:08.616 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:08.875 nvme1n2 00:19:08.875 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:19:08.875 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:19:08.875 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:08.875 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:19:08.875 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:19:08.875 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:19:08.875 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:19:08.875 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:19:08.875 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:19:09.134 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 8c36ac4f-1148-481c-bb17-620206b8e989 == \8\c\3\6\a\c\4\f\-\1\1\4\8\-\4\8\1\c\-\b\b\1\7\-\6\2\0\2\0\6\b\8\e\9\8\9 ]] 00:19:09.134 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:19:09.134 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:19:09.134 10:21:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:19:09.392 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
4cd65251-4dce-409d-a5a5-8109ccb37b1d == \4\c\d\6\5\2\5\1\-\4\d\c\e\-\4\0\9\d\-\a\5\a\5\-\8\1\0\9\c\c\b\3\7\b\1\d ]] 00:19:09.392 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:09.651 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:09.910 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 8c36ac4f-1148-481c-bb17-620206b8e989 00:19:09.910 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:09.910 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8C36AC4F1148481CBB17620206B8E989 00:19:09.910 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:09.910 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8C36AC4F1148481CBB17620206B8E989 00:19:09.910 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:09.910 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.910 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:09.910 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.910 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:09.910 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.910 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:09.910 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:09.910 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 8C36AC4F1148481CBB17620206B8E989 00:19:09.910 [2024-12-13 10:21:03.758535] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:19:09.910 [2024-12-13 10:21:03.758579] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:19:09.910 [2024-12-13 10:21:03.758592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.910 request: 00:19:09.910 { 00:19:09.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.910 "namespace": { 00:19:09.910 "bdev_name": 
"invalid", 00:19:09.910 "nsid": 1, 00:19:09.910 "nguid": "8C36AC4F1148481CBB17620206B8E989", 00:19:09.910 "no_auto_visible": false, 00:19:09.910 "hide_metadata": false 00:19:09.910 }, 00:19:09.910 "method": "nvmf_subsystem_add_ns", 00:19:09.910 "req_id": 1 00:19:09.910 } 00:19:09.910 Got JSON-RPC error response 00:19:09.910 response: 00:19:09.910 { 00:19:09.910 "code": -32602, 00:19:09.910 "message": "Invalid parameters" 00:19:09.910 } 00:19:09.910 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:09.910 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:09.911 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:09.911 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:09.911 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 8c36ac4f-1148-481c-bb17-620206b8e989 00:19:09.911 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:09.911 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8C36AC4F1148481CBB17620206B8E989 -i 00:19:10.169 10:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:19:12.703 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:19:12.703 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:19:12.703 10:21:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:12.703 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:19:12.703 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3904162 00:19:12.703 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3904162 ']' 00:19:12.703 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3904162 00:19:12.703 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:12.703 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.703 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3904162 00:19:12.703 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:12.703 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:12.703 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3904162' 00:19:12.703 killing process with pid 3904162 00:19:12.703 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3904162 00:19:12.703 10:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3904162 00:19:14.606 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:14.865 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:14.865 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:19:14.865 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:14.865 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:19:14.865 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:14.865 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:19:14.865 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:14.865 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:14.865 rmmod nvme_tcp 00:19:14.865 rmmod nvme_fabrics 00:19:14.865 rmmod nvme_keyring 00:19:15.124 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:15.124 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:19:15.124 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:19:15.124 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3902118 ']' 00:19:15.124 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3902118 00:19:15.124 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3902118 ']' 00:19:15.124 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3902118 00:19:15.124 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:15.124 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:15.124 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3902118 00:19:15.124 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:15.124 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:15.124 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3902118' 00:19:15.124 killing process with pid 3902118 00:19:15.124 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3902118 00:19:15.124 10:21:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3902118 00:19:16.501 10:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:16.501 10:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:16.501 10:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:16.501 10:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:19:16.501 10:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:19:16.501 10:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
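Editor's note on the teardown traced here: nvmftestfini unloads the kernel nvme-tcp/fabrics/keyring modules, kills the target process, and then strips the firewall rules the test added. Setup tags every rule it inserts with an "SPDK_NVMF" comment (visible later in this log where the listener port 4420 is opened), so teardown only has to re-apply the saved ruleset with the tagged entries filtered out. A minimal sketch of that tag-and-filter idiom, with illustrative _sketch names; the real helpers are ipts/iptr in nvmf/common.sh:

    # sketch only, not the nvmf/common.sh implementation
    # setup: tag every rule the test adds so it can be found again later
    ipts_sketch() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    # teardown: restore the saved ruleset minus anything carrying the tag
    iptr_sketch() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }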
00:19:16.501 10:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:19:16.501 10:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:16.501 10:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:16.501 10:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.502 10:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:16.502 10:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:19.039 00:19:19.039 real 0m30.605s 00:19:19.039 user 0m38.648s 00:19:19.039 sys 0m6.817s 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:19.039 ************************************ 00:19:19.039 END TEST nvmf_ns_masking 00:19:19.039 ************************************ 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:19.039 ************************************ 00:19:19.039 START TEST nvmf_nvme_cli 00:19:19.039 ************************************ 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:19.039 * Looking for test storage... 
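Editor's note: the START TEST / END TEST banners and the real/user/sys lines around each test come from the run_test wrapper. From what is observable in this log it amounts to roughly the following; this is a sketch of the visible behavior only, not the actual autotest_common.sh code, and run_test_sketch is an illustrative name:

    # sketch only: banner, time the test script, banner again
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    # e.g. run_test_sketch nvmf_nvme_cli ./test/nvmf/target/nvme_cli.sh --transport=tcp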
00:19:19.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:19.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.039 --rc genhtml_branch_coverage=1 00:19:19.039 --rc genhtml_function_coverage=1 00:19:19.039 --rc genhtml_legend=1 00:19:19.039 --rc geninfo_all_blocks=1 00:19:19.039 --rc geninfo_unexecuted_blocks=1 00:19:19.039 00:19:19.039 ' 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:19.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.039 --rc genhtml_branch_coverage=1 00:19:19.039 --rc genhtml_function_coverage=1 00:19:19.039 --rc genhtml_legend=1 00:19:19.039 --rc geninfo_all_blocks=1 00:19:19.039 --rc geninfo_unexecuted_blocks=1 00:19:19.039 00:19:19.039 ' 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:19.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.039 --rc genhtml_branch_coverage=1 00:19:19.039 --rc genhtml_function_coverage=1 00:19:19.039 --rc genhtml_legend=1 00:19:19.039 --rc geninfo_all_blocks=1 00:19:19.039 --rc geninfo_unexecuted_blocks=1 00:19:19.039 00:19:19.039 ' 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:19.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.039 --rc genhtml_branch_coverage=1 00:19:19.039 --rc genhtml_function_coverage=1 00:19:19.039 --rc genhtml_legend=1 00:19:19.039 --rc geninfo_all_blocks=1 00:19:19.039 --rc geninfo_unexecuted_blocks=1 00:19:19.039 00:19:19.039 ' 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
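Editor's note: the trace above is the generic version comparison from scripts/common.sh (lt calling cmp_versions): both version strings are split on '.', '-' and ':' into fields and compared numerically field by field, missing fields counting as zero. A simplified sketch of the same idea, restricted to the "less than" case and numeric fields (version_lt_sketch is an illustrative name, not the real helper):

    # simplified sketch of the field-by-field compare traced above (numeric fields only)
    version_lt_sketch() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }
    version_lt_sketch 1.15 2 && echo "lcov 1.15 predates 2.x"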
00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:19.039 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:19.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:19.040 10:21:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:19:19.040 10:21:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:24.315 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:24.315 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.315 
10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:24.315 Found net devices under 0000:af:00.0: cvl_0_0 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:24.315 Found net devices under 0000:af:00.1: cvl_0_1 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.315 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:24.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:24.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:19:24.316 00:19:24.316 --- 10.0.0.2 ping statistics --- 00:19:24.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.316 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:24.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:24.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:19:24.316 00:19:24.316 --- 10.0.0.1 ping statistics --- 00:19:24.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.316 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3909760 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3909760 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3909760 ']' 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.316 10:21:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:24.316 [2024-12-13 10:21:18.052005] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:19:24.316 [2024-12-13 10:21:18.052094] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.316 [2024-12-13 10:21:18.170774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:24.576 [2024-12-13 10:21:18.285324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.576 [2024-12-13 10:21:18.285366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.576 [2024-12-13 10:21:18.285376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.576 [2024-12-13 10:21:18.285387] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.576 [2024-12-13 10:21:18.285395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:24.576 [2024-12-13 10:21:18.287739] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.576 [2024-12-13 10:21:18.288767] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.576 [2024-12-13 10:21:18.288971] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.576 [2024-12-13 10:21:18.288975] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:19:25.144 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.144 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:19:25.144 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:25.144 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:25.144 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.144 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.144 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:25.144 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.144 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.144 [2024-12-13 10:21:18.901911] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.144 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.144 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:25.144 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.144 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.144 Malloc0 00:19:25.144 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.144 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:25.144 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
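Editor's note: once nvmf_tgt is up and its reactors are running, the test configures the target entirely over JSON-RPC. Collected from the rpc_cmd calls traced around this point, the sequence is as below; it is shown as direct scripts/rpc.py invocations for readability, whereas in the test each call goes through the rpc_cmd wrapper against /var/tmp/spdk.sock:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

After this the initiator side can run nvme discover / nvme connect against 10.0.0.2:4420, which is what the remainder of the test does.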
00:19:25.144 10:21:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.404 Malloc1 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.404 [2024-12-13 10:21:19.108799] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:19:25.404 00:19:25.404 Discovery Log Number of Records 2, Generation counter 2 00:19:25.404 =====Discovery Log Entry 0====== 00:19:25.404 trtype: tcp 00:19:25.404 adrfam: ipv4 00:19:25.404 subtype: current discovery subsystem 00:19:25.404 treq: not required 00:19:25.404 portid: 0 00:19:25.404 trsvcid: 4420 00:19:25.404 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:19:25.404 traddr: 10.0.0.2 00:19:25.404 eflags: explicit discovery connections, duplicate discovery information 00:19:25.404 sectype: none 00:19:25.404 =====Discovery Log Entry 1====== 00:19:25.404 trtype: tcp 00:19:25.404 adrfam: ipv4 00:19:25.404 subtype: nvme subsystem 00:19:25.404 treq: not required 00:19:25.404 portid: 0 00:19:25.404 trsvcid: 4420 00:19:25.404 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:25.404 traddr: 10.0.0.2 00:19:25.404 eflags: none 00:19:25.404 sectype: none 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:25.404 10:21:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:26.783 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:26.783 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:19:26.783 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:26.783 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:26.783 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:26.783 10:21:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:28.687 10:21:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:28.687 /dev/nvme0n2 ]] 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:28.687 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:28.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:28.946 10:21:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:28.946 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:19:28.947 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:28.947 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:28.947 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:28.947 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:28.947 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:19:28.947 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:28.947 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:28.947 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.947 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:28.947 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.947 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:28.947 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:28.947 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:28.947 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:28.947 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:28.947 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:29.206 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:29.206 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:29.206 rmmod nvme_tcp 00:19:29.206 rmmod nvme_fabrics 00:19:29.206 rmmod nvme_keyring 00:19:29.206 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:29.206 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:29.206 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:29.206 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3909760 ']' 00:19:29.206 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3909760 00:19:29.206 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3909760 ']' 00:19:29.206 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3909760 00:19:29.206 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:19:29.206 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.206 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3909760 00:19:29.206 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:29.206 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:29.206 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3909760' 00:19:29.206 killing process with pid 3909760 00:19:29.206 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3909760 00:19:29.206 10:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3909760 00:19:31.112 10:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:31.112 10:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:31.112 10:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:31.112 10:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:19:31.112 10:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:19:31.112 10:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:31.112 10:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:19:31.112 10:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:31.112 10:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:31.112 10:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.112 10:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.112 10:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.019 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:33.019 00:19:33.019 real 0m14.106s 00:19:33.019 user 0m25.351s 00:19:33.019 sys 0m4.779s 00:19:33.019 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:33.019 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:33.019 ************************************ 00:19:33.019 END TEST nvmf_nvme_cli 00:19:33.019 ************************************ 00:19:33.019 10:21:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:19:33.019 10:21:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:33.019 10:21:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:33.019 10:21:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:33.019 10:21:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:33.020 ************************************ 00:19:33.020 START TEST nvmf_auth_target 00:19:33.020 ************************************ 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 
00:19:33.020 * Looking for test storage... 00:19:33.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:33.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.020 --rc genhtml_branch_coverage=1 00:19:33.020 --rc genhtml_function_coverage=1 00:19:33.020 --rc genhtml_legend=1 00:19:33.020 --rc geninfo_all_blocks=1 00:19:33.020 --rc geninfo_unexecuted_blocks=1 00:19:33.020 00:19:33.020 ' 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:33.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.020 --rc genhtml_branch_coverage=1 00:19:33.020 --rc genhtml_function_coverage=1 00:19:33.020 --rc genhtml_legend=1 00:19:33.020 --rc geninfo_all_blocks=1 00:19:33.020 --rc geninfo_unexecuted_blocks=1 00:19:33.020 00:19:33.020 ' 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:33.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.020 --rc genhtml_branch_coverage=1 00:19:33.020 --rc genhtml_function_coverage=1 00:19:33.020 --rc genhtml_legend=1 00:19:33.020 --rc geninfo_all_blocks=1 00:19:33.020 --rc geninfo_unexecuted_blocks=1 00:19:33.020 00:19:33.020 ' 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:33.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.020 --rc genhtml_branch_coverage=1 00:19:33.020 --rc genhtml_function_coverage=1 00:19:33.020 --rc genhtml_legend=1 00:19:33.020 --rc geninfo_all_blocks=1 00:19:33.020 --rc geninfo_unexecuted_blocks=1 00:19:33.020 00:19:33.020 ' 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.020 10:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.020 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:33.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:33.021 10:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:38.293 
10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:38.293 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:38.294 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:38.294 10:21:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:38.294 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:38.294 Found net devices under 0000:af:00.0: cvl_0_0 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:38.294 Found net devices under 0000:af:00.1: cvl_0_1 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:38.294 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:38.554 10:21:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:38.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:38.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:19:38.554 00:19:38.554 --- 10.0.0.2 ping statistics --- 00:19:38.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.554 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:38.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:38.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:19:38.554 00:19:38.554 --- 10.0.0.1 ping statistics --- 00:19:38.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.554 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3914364 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3914364 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3914364 ']' 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.554 10:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.492 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.492 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:39.492 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3914395 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e41e73e579dc33cc2c685d14391c5102c2a3265de63305db 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Dhe 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e41e73e579dc33cc2c685d14391c5102c2a3265de63305db 0 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e41e73e579dc33cc2c685d14391c5102c2a3265de63305db 0 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # 
key=e41e73e579dc33cc2c685d14391c5102c2a3265de63305db 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Dhe 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Dhe 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Dhe 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1f55dd4bb420bd76643eaca27b10eeb07a890536fab1c588cf3cae68bb290bf8 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.GdK 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1f55dd4bb420bd76643eaca27b10eeb07a890536fab1c588cf3cae68bb290bf8 3 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1f55dd4bb420bd76643eaca27b10eeb07a890536fab1c588cf3cae68bb290bf8 3 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1f55dd4bb420bd76643eaca27b10eeb07a890536fab1c588cf3cae68bb290bf8 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.GdK 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.GdK 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.GdK 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' 
['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a211f0ad65e3775c4cb30d815f9bc0de 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.aqn 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a211f0ad65e3775c4cb30d815f9bc0de 1 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a211f0ad65e3775c4cb30d815f9bc0de 1 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a211f0ad65e3775c4cb30d815f9bc0de 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:39.493 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.aqn 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.aqn 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.aqn 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d11d211b18160a7f7156fbd83adcca8ce7d7ba889394a2f3 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.3MW 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d11d211b18160a7f7156fbd83adcca8ce7d7ba889394a2f3 2 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@747 -- # format_key DHHC-1 d11d211b18160a7f7156fbd83adcca8ce7d7ba889394a2f3 2 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d11d211b18160a7f7156fbd83adcca8ce7d7ba889394a2f3 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.3MW 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.3MW 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.3MW 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9bb806f91861b7107463f658ead079935cc99a40a039858c 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.BBE 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9bb806f91861b7107463f658ead079935cc99a40a039858c 2 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9bb806f91861b7107463f658ead079935cc99a40a039858c 2 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9bb806f91861b7107463f658ead079935cc99a40a039858c 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.BBE 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.BBE 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.BBE 
00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ea3444bc5150f8767c52ee87657b43f6 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.bIH 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ea3444bc5150f8767c52ee87657b43f6 1 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ea3444bc5150f8767c52ee87657b43f6 1 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ea3444bc5150f8767c52ee87657b43f6 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.bIH 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.bIH 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.bIH 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=52e700273b4196d717ad1f5fb124f8a35c059e04a0d60bcf57600e008d9f280b 00:19:39.753 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t 
spdk.key-sha512.XXX 00:19:39.754 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Jyj 00:19:39.754 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 52e700273b4196d717ad1f5fb124f8a35c059e04a0d60bcf57600e008d9f280b 3 00:19:39.754 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 52e700273b4196d717ad1f5fb124f8a35c059e04a0d60bcf57600e008d9f280b 3 00:19:39.754 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:39.754 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:39.754 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=52e700273b4196d717ad1f5fb124f8a35c059e04a0d60bcf57600e008d9f280b 00:19:39.754 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:39.754 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:40.013 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Jyj 00:19:40.013 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Jyj 00:19:40.013 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Jyj 00:19:40.013 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:40.013 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3914364 00:19:40.013 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3914364 ']' 00:19:40.013 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.013 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.013 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.013 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.013 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.013 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.013 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:40.013 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3914395 /var/tmp/host.sock 00:19:40.013 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3914395 ']' 00:19:40.013 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:40.013 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.013 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
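With all four key pairs written out, the test waits for two separate RPC servers: the nvmf target answering on the default /var/tmp/spdk.sock (pid 3914364, driven through rpc_cmd) and the host-side application answering on /var/tmp/host.sock (pid 3914395, driven through the hostrpc wrapper). The keyring loop that follows registers every key file with both of them. The loop below is a condensed, hypothetical rendering of that step; its exact shape is an assumption, but the keyring_file_add_key RPCs, the sockets, and the key0..key3 / ckey0..ckey2 names are the ones the trace issues.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for i in "${!keys[@]}"; do
        # target side: default RPC socket /var/tmp/spdk.sock
        "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"
        # host side: the bdev_nvme initiator listening on /var/tmp/host.sock
        "$rpc" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"
        # controller (bidirectional) keys are optional; key3 has no ckey in this run
        if [[ -n "${ckeys[$i]}" ]]; then
            "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
            "$rpc" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        fi
    done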
00:19:40.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:40.013 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.013 10:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.581 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.581 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:40.581 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:40.581 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.581 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.581 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.581 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:40.581 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Dhe 00:19:40.581 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.581 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.581 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.581 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Dhe 00:19:40.581 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Dhe 00:19:40.840 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.GdK ]] 00:19:40.840 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GdK 00:19:40.840 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.840 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.840 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.840 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GdK 00:19:40.840 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GdK 00:19:41.099 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:41.099 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.aqn 00:19:41.099 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.099 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.099 10:21:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.099 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.aqn 00:19:41.099 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.aqn 00:19:41.099 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.3MW ]] 00:19:41.099 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3MW 00:19:41.099 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.099 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.099 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.099 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3MW 00:19:41.099 10:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3MW 00:19:41.363 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:41.363 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.BBE 00:19:41.363 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.363 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.363 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.363 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.BBE 00:19:41.363 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.BBE 00:19:41.623 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.bIH ]] 00:19:41.623 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bIH 00:19:41.623 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.623 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.623 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.623 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bIH 00:19:41.623 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bIH 00:19:41.882 10:21:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:41.882 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Jyj 00:19:41.882 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.882 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.882 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.882 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Jyj 00:19:41.882 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Jyj 00:19:41.882 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:41.882 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:41.882 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.882 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.882 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:41.882 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:42.141 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:42.141 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.141 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:42.141 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:42.141 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:42.141 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.141 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.141 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.141 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.141 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.141 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.141 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.141 10:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.400 00:19:42.400 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.400 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.400 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.659 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.659 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.659 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.659 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.659 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.659 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.659 { 00:19:42.659 "cntlid": 1, 00:19:42.659 "qid": 0, 00:19:42.659 "state": "enabled", 00:19:42.659 "thread": "nvmf_tgt_poll_group_000", 00:19:42.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:42.659 "listen_address": { 00:19:42.659 "trtype": "TCP", 00:19:42.659 "adrfam": "IPv4", 00:19:42.659 "traddr": "10.0.0.2", 00:19:42.659 "trsvcid": "4420" 00:19:42.659 }, 00:19:42.659 "peer_address": { 00:19:42.659 "trtype": "TCP", 00:19:42.659 "adrfam": "IPv4", 00:19:42.659 "traddr": "10.0.0.1", 00:19:42.659 "trsvcid": "43990" 00:19:42.659 }, 00:19:42.659 "auth": { 00:19:42.659 "state": "completed", 00:19:42.659 "digest": "sha256", 00:19:42.659 "dhgroup": "null" 00:19:42.659 } 00:19:42.659 } 00:19:42.659 ]' 00:19:42.659 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.659 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.659 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.659 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:42.659 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.659 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.659 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.659 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
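Each connect_authenticate iteration in this trace reduces to the same five RPC steps: restrict the host to one digest/DH-group pair, allow the host NQN on the subsystem with the key (and optional controller key) under test, attach a controller through the host-side bdev_nvme, read the resulting qpair back and check that auth.state, auth.digest and auth.dhgroup match, then detach. The sketch below compresses the first iteration (sha256 digest, "null" DH group, key0/ckey0); the rpc and hostnqn variables and the hostrpc helper are assumed definitions, while the RPC names and flags are taken from the trace. After this RPC-level check, the trace repeats the handshake through the kernel initiator with nvme connect --dhchap-secret/--dhchap-ctrl-secret, disconnects, removes the host from the subsystem, and moves on to the next key.

    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side RPC helper (assumed)

    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key key0 --dhchap-ctrlr-key ckey0
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
            --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # the qpair must report a completed DH-HMAC-CHAP negotiation with the expected parameters
    "$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
            | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'   # completed sha256 null
    hostrpc bdev_nvme_detach_controller nvme0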
00:19:42.918 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:19:42.918 10:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:19:43.485 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.485 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:43.485 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.485 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.485 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.485 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.485 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:43.485 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:43.744 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:43.744 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.744 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:43.744 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:43.744 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:43.744 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.744 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.744 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.744 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.744 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.744 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.744 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.744 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.003 00:19:44.003 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.003 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.003 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.261 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.261 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.261 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.261 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.261 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.261 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.261 { 00:19:44.261 "cntlid": 3, 00:19:44.261 "qid": 0, 00:19:44.261 "state": "enabled", 00:19:44.261 "thread": "nvmf_tgt_poll_group_000", 00:19:44.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:44.261 "listen_address": { 00:19:44.261 "trtype": "TCP", 00:19:44.261 "adrfam": "IPv4", 00:19:44.261 "traddr": "10.0.0.2", 00:19:44.261 "trsvcid": "4420" 00:19:44.261 }, 00:19:44.261 "peer_address": { 00:19:44.261 "trtype": "TCP", 00:19:44.261 "adrfam": "IPv4", 00:19:44.261 "traddr": "10.0.0.1", 00:19:44.261 "trsvcid": "40136" 00:19:44.261 }, 00:19:44.261 "auth": { 00:19:44.261 "state": "completed", 00:19:44.261 "digest": "sha256", 00:19:44.261 "dhgroup": "null" 00:19:44.261 } 00:19:44.261 } 00:19:44.261 ]' 00:19:44.261 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.261 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.261 10:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.261 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:44.261 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.261 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.261 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:44.261 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.520 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:19:44.520 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:19:45.087 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.088 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:45.088 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.088 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.088 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.088 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.088 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:45.088 10:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:45.347 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:45.347 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:45.347 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:45.347 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:45.347 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:45.347 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.347 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.347 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.347 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.347 10:21:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.347 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.347 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.347 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.606 00:19:45.606 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.606 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.606 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.606 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.606 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.606 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.606 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.606 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.606 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.606 { 00:19:45.606 "cntlid": 5, 00:19:45.606 "qid": 0, 00:19:45.606 "state": "enabled", 00:19:45.606 "thread": "nvmf_tgt_poll_group_000", 00:19:45.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:45.606 "listen_address": { 00:19:45.606 "trtype": "TCP", 00:19:45.606 "adrfam": "IPv4", 00:19:45.606 "traddr": "10.0.0.2", 00:19:45.606 "trsvcid": "4420" 00:19:45.606 }, 00:19:45.606 "peer_address": { 00:19:45.606 "trtype": "TCP", 00:19:45.606 "adrfam": "IPv4", 00:19:45.606 "traddr": "10.0.0.1", 00:19:45.606 "trsvcid": "40172" 00:19:45.606 }, 00:19:45.606 "auth": { 00:19:45.606 "state": "completed", 00:19:45.606 "digest": "sha256", 00:19:45.606 "dhgroup": "null" 00:19:45.606 } 00:19:45.606 } 00:19:45.606 ]' 00:19:45.606 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.865 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.865 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.865 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:45.865 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.865 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.865 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.865 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.123 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:19:46.123 10:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:19:46.691 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.691 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:46.691 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.691 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.691 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.691 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.691 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:46.691 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:46.691 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:46.691 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.691 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.691 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:46.691 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:46.691 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.691 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:46.691 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.691 
10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.691 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.691 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:46.691 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:46.691 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:46.950 00:19:46.950 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.950 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.950 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.209 10:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.209 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.209 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.209 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.209 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.209 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.209 { 00:19:47.209 "cntlid": 7, 00:19:47.209 "qid": 0, 00:19:47.209 "state": "enabled", 00:19:47.209 "thread": "nvmf_tgt_poll_group_000", 00:19:47.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:47.209 "listen_address": { 00:19:47.209 "trtype": "TCP", 00:19:47.209 "adrfam": "IPv4", 00:19:47.209 "traddr": "10.0.0.2", 00:19:47.209 "trsvcid": "4420" 00:19:47.209 }, 00:19:47.209 "peer_address": { 00:19:47.209 "trtype": "TCP", 00:19:47.209 "adrfam": "IPv4", 00:19:47.209 "traddr": "10.0.0.1", 00:19:47.209 "trsvcid": "40202" 00:19:47.209 }, 00:19:47.209 "auth": { 00:19:47.209 "state": "completed", 00:19:47.209 "digest": "sha256", 00:19:47.209 "dhgroup": "null" 00:19:47.209 } 00:19:47.209 } 00:19:47.209 ]' 00:19:47.209 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.209 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.209 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.209 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:47.209 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.467 10:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.467 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.467 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.467 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:19:47.467 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:19:48.034 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.034 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:48.034 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.034 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.034 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.034 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.034 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.034 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:48.034 10:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:48.293 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:48.293 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.293 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.293 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:48.293 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:48.293 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.293 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.293 10:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.293 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.293 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.293 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.293 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.293 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.550 00:19:48.550 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.550 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.550 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.809 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.809 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.809 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.809 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.809 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.809 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.809 { 00:19:48.809 "cntlid": 9, 00:19:48.809 "qid": 0, 00:19:48.809 "state": "enabled", 00:19:48.809 "thread": "nvmf_tgt_poll_group_000", 00:19:48.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:48.809 "listen_address": { 00:19:48.809 "trtype": "TCP", 00:19:48.809 "adrfam": "IPv4", 00:19:48.809 "traddr": "10.0.0.2", 00:19:48.809 "trsvcid": "4420" 00:19:48.809 }, 00:19:48.809 "peer_address": { 00:19:48.809 "trtype": "TCP", 00:19:48.809 "adrfam": "IPv4", 00:19:48.809 "traddr": "10.0.0.1", 00:19:48.809 "trsvcid": "40228" 00:19:48.809 }, 00:19:48.809 "auth": { 00:19:48.809 "state": "completed", 00:19:48.809 "digest": "sha256", 00:19:48.809 "dhgroup": "ffdhe2048" 00:19:48.809 } 00:19:48.809 } 00:19:48.809 ]' 00:19:48.809 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.809 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.809 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.809 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:48.809 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.809 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.809 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.809 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.067 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:19:49.067 10:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:19:49.635 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.635 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:49.635 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.635 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.635 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.635 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.635 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:49.635 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:49.894 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:49.894 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.894 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:49.894 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:49.894 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:49.894 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.894 
10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.894 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.894 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.894 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.894 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.894 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.894 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.153 00:19:50.153 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.153 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.153 10:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.412 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.412 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.412 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.412 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.412 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.412 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.412 { 00:19:50.412 "cntlid": 11, 00:19:50.412 "qid": 0, 00:19:50.412 "state": "enabled", 00:19:50.412 "thread": "nvmf_tgt_poll_group_000", 00:19:50.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:50.412 "listen_address": { 00:19:50.412 "trtype": "TCP", 00:19:50.412 "adrfam": "IPv4", 00:19:50.412 "traddr": "10.0.0.2", 00:19:50.412 "trsvcid": "4420" 00:19:50.412 }, 00:19:50.412 "peer_address": { 00:19:50.412 "trtype": "TCP", 00:19:50.412 "adrfam": "IPv4", 00:19:50.412 "traddr": "10.0.0.1", 00:19:50.412 "trsvcid": "40252" 00:19:50.412 }, 00:19:50.412 "auth": { 00:19:50.412 "state": "completed", 00:19:50.412 "digest": "sha256", 00:19:50.412 "dhgroup": "ffdhe2048" 00:19:50.412 } 00:19:50.412 } 00:19:50.412 ]' 00:19:50.412 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.412 10:21:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.412 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.412 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:50.412 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.412 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.412 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.412 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.671 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:19:50.671 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:19:51.238 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.238 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:51.238 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.238 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.238 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.239 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.239 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:51.239 10:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:51.497 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:51.497 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.497 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.497 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:51.497 10:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:51.497 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.497 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.497 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.497 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.497 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.497 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.497 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.497 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.756 00:19:51.756 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.756 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.756 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.756 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.756 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.756 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.756 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.756 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.756 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.756 { 00:19:51.756 "cntlid": 13, 00:19:51.756 "qid": 0, 00:19:51.756 "state": "enabled", 00:19:51.756 "thread": "nvmf_tgt_poll_group_000", 00:19:51.756 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:51.756 "listen_address": { 00:19:51.756 "trtype": "TCP", 00:19:51.756 "adrfam": "IPv4", 00:19:51.756 "traddr": "10.0.0.2", 00:19:51.756 "trsvcid": "4420" 00:19:51.756 }, 00:19:51.756 "peer_address": { 00:19:51.756 "trtype": "TCP", 00:19:51.756 "adrfam": "IPv4", 00:19:51.756 "traddr": "10.0.0.1", 00:19:51.756 "trsvcid": "40296" 00:19:51.756 }, 00:19:51.756 "auth": { 00:19:51.756 "state": "completed", 00:19:51.756 "digest": 
"sha256", 00:19:51.756 "dhgroup": "ffdhe2048" 00:19:51.756 } 00:19:51.756 } 00:19:51.756 ]' 00:19:51.756 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.015 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.015 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.015 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:52.015 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.015 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.015 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.015 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.274 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:19:52.274 10:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:19:52.893 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.893 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:52.893 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.893 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.893 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.893 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.893 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.893 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:52.893 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:52.893 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.893 10:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:52.893 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:52.893 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:52.893 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.893 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:52.893 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.893 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.893 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.893 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:52.893 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:52.893 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.176 00:19:53.176 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.176 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.176 10:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.458 10:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.458 10:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.458 10:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.458 10:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.458 10:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.458 10:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.458 { 00:19:53.458 "cntlid": 15, 00:19:53.458 "qid": 0, 00:19:53.458 "state": "enabled", 00:19:53.458 "thread": "nvmf_tgt_poll_group_000", 00:19:53.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:53.458 "listen_address": { 00:19:53.458 "trtype": "TCP", 00:19:53.458 "adrfam": "IPv4", 00:19:53.458 "traddr": "10.0.0.2", 00:19:53.458 "trsvcid": "4420" 00:19:53.458 }, 00:19:53.458 "peer_address": { 00:19:53.458 "trtype": "TCP", 00:19:53.458 "adrfam": "IPv4", 00:19:53.458 "traddr": "10.0.0.1", 00:19:53.458 
"trsvcid": "59714" 00:19:53.458 }, 00:19:53.458 "auth": { 00:19:53.458 "state": "completed", 00:19:53.458 "digest": "sha256", 00:19:53.458 "dhgroup": "ffdhe2048" 00:19:53.458 } 00:19:53.458 } 00:19:53.458 ]' 00:19:53.458 10:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.458 10:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.458 10:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.458 10:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:53.458 10:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.458 10:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.458 10:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.458 10:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.717 10:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:19:53.717 10:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:19:54.284 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.284 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:54.284 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.284 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.284 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.284 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.284 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.284 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:54.284 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:54.543 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:54.543 10:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.543 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:54.543 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:54.543 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:54.543 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.543 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.543 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.543 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.543 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.543 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.543 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.543 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.802 00:19:54.802 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.802 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.802 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.060 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.060 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.060 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.060 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.060 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.060 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.060 { 00:19:55.060 "cntlid": 17, 00:19:55.060 "qid": 0, 00:19:55.060 "state": "enabled", 00:19:55.060 "thread": "nvmf_tgt_poll_group_000", 00:19:55.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:55.060 "listen_address": { 00:19:55.060 "trtype": "TCP", 00:19:55.060 "adrfam": "IPv4", 
00:19:55.061 "traddr": "10.0.0.2", 00:19:55.061 "trsvcid": "4420" 00:19:55.061 }, 00:19:55.061 "peer_address": { 00:19:55.061 "trtype": "TCP", 00:19:55.061 "adrfam": "IPv4", 00:19:55.061 "traddr": "10.0.0.1", 00:19:55.061 "trsvcid": "59752" 00:19:55.061 }, 00:19:55.061 "auth": { 00:19:55.061 "state": "completed", 00:19:55.061 "digest": "sha256", 00:19:55.061 "dhgroup": "ffdhe3072" 00:19:55.061 } 00:19:55.061 } 00:19:55.061 ]' 00:19:55.061 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.061 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.061 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.061 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:55.061 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.061 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.061 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.061 10:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.320 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:19:55.320 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:19:55.887 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.887 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:55.887 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.887 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.887 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.887 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.887 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:55.887 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:56.145 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:56.145 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.145 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:56.145 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:56.145 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:56.145 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.145 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.145 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.145 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.145 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.145 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.145 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.145 10:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.404 00:19:56.404 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.404 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.404 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.663 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.663 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.663 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.663 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.663 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.663 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.663 { 
00:19:56.663 "cntlid": 19, 00:19:56.663 "qid": 0, 00:19:56.663 "state": "enabled", 00:19:56.663 "thread": "nvmf_tgt_poll_group_000", 00:19:56.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:56.663 "listen_address": { 00:19:56.663 "trtype": "TCP", 00:19:56.663 "adrfam": "IPv4", 00:19:56.663 "traddr": "10.0.0.2", 00:19:56.663 "trsvcid": "4420" 00:19:56.663 }, 00:19:56.663 "peer_address": { 00:19:56.663 "trtype": "TCP", 00:19:56.663 "adrfam": "IPv4", 00:19:56.663 "traddr": "10.0.0.1", 00:19:56.663 "trsvcid": "59768" 00:19:56.663 }, 00:19:56.663 "auth": { 00:19:56.663 "state": "completed", 00:19:56.663 "digest": "sha256", 00:19:56.663 "dhgroup": "ffdhe3072" 00:19:56.663 } 00:19:56.663 } 00:19:56.663 ]' 00:19:56.663 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.663 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.663 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.663 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:56.663 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.663 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.663 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.663 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.922 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:19:56.922 10:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:19:57.490 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.490 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:57.490 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.490 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.490 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.490 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.490 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:57.490 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:57.749 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:57.749 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.749 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:57.749 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:57.749 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:57.749 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.749 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.749 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.749 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.749 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.749 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.749 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.749 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.007 00:19:58.008 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.008 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.008 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.008 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.008 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.008 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.008 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.266 10:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.266 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.266 { 00:19:58.267 "cntlid": 21, 00:19:58.267 "qid": 0, 00:19:58.267 "state": "enabled", 00:19:58.267 "thread": "nvmf_tgt_poll_group_000", 00:19:58.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:58.267 "listen_address": { 00:19:58.267 "trtype": "TCP", 00:19:58.267 "adrfam": "IPv4", 00:19:58.267 "traddr": "10.0.0.2", 00:19:58.267 "trsvcid": "4420" 00:19:58.267 }, 00:19:58.267 "peer_address": { 00:19:58.267 "trtype": "TCP", 00:19:58.267 "adrfam": "IPv4", 00:19:58.267 "traddr": "10.0.0.1", 00:19:58.267 "trsvcid": "59798" 00:19:58.267 }, 00:19:58.267 "auth": { 00:19:58.267 "state": "completed", 00:19:58.267 "digest": "sha256", 00:19:58.267 "dhgroup": "ffdhe3072" 00:19:58.267 } 00:19:58.267 } 00:19:58.267 ]' 00:19:58.267 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.267 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.267 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.267 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:58.267 10:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.267 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.267 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.267 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.526 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:19:58.526 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:19:59.094 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.094 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:59.094 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.095 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.095 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:59.095 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.095 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:59.095 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:59.095 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:59.095 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.095 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:59.095 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:59.095 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:59.095 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.095 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:59.095 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.095 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.095 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.095 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:59.095 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:59.095 10:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:59.354 00:19:59.354 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.354 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.354 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.613 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.613 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.613 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.613 10:21:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.613 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.613 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.613 { 00:19:59.613 "cntlid": 23, 00:19:59.613 "qid": 0, 00:19:59.613 "state": "enabled", 00:19:59.613 "thread": "nvmf_tgt_poll_group_000", 00:19:59.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:59.613 "listen_address": { 00:19:59.613 "trtype": "TCP", 00:19:59.613 "adrfam": "IPv4", 00:19:59.613 "traddr": "10.0.0.2", 00:19:59.613 "trsvcid": "4420" 00:19:59.613 }, 00:19:59.613 "peer_address": { 00:19:59.613 "trtype": "TCP", 00:19:59.613 "adrfam": "IPv4", 00:19:59.613 "traddr": "10.0.0.1", 00:19:59.613 "trsvcid": "59806" 00:19:59.613 }, 00:19:59.613 "auth": { 00:19:59.613 "state": "completed", 00:19:59.613 "digest": "sha256", 00:19:59.613 "dhgroup": "ffdhe3072" 00:19:59.613 } 00:19:59.613 } 00:19:59.613 ]' 00:19:59.613 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.613 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.613 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.872 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:59.872 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.872 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.872 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.872 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.872 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:19:59.872 10:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:20:00.440 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.440 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:00.440 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.440 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.440 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:20:00.440 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.440 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.440 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:00.440 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:00.699 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:00.699 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.699 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.699 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:00.699 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:00.699 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.699 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.699 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.699 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.699 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.699 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.699 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.699 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.958 00:20:00.958 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.958 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.958 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.217 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.217 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.217 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.217 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.217 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.217 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.217 { 00:20:01.217 "cntlid": 25, 00:20:01.217 "qid": 0, 00:20:01.217 "state": "enabled", 00:20:01.217 "thread": "nvmf_tgt_poll_group_000", 00:20:01.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:01.217 "listen_address": { 00:20:01.217 "trtype": "TCP", 00:20:01.217 "adrfam": "IPv4", 00:20:01.217 "traddr": "10.0.0.2", 00:20:01.217 "trsvcid": "4420" 00:20:01.217 }, 00:20:01.217 "peer_address": { 00:20:01.217 "trtype": "TCP", 00:20:01.217 "adrfam": "IPv4", 00:20:01.217 "traddr": "10.0.0.1", 00:20:01.217 "trsvcid": "59840" 00:20:01.217 }, 00:20:01.217 "auth": { 00:20:01.217 "state": "completed", 00:20:01.217 "digest": "sha256", 00:20:01.217 "dhgroup": "ffdhe4096" 00:20:01.217 } 00:20:01.217 } 00:20:01.217 ]' 00:20:01.217 10:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.217 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:01.217 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.217 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:01.217 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.475 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.476 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.476 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.476 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:20:01.476 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:20:02.043 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.043 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:02.043 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.043 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.043 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.043 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.043 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:02.043 10:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:02.302 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:02.302 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.302 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:02.302 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:02.302 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:02.302 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.302 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.302 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.302 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.302 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.302 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.302 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.302 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.561 00:20:02.561 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.561 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.561 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.820 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.820 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.820 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.820 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.820 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.820 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.820 { 00:20:02.820 "cntlid": 27, 00:20:02.820 "qid": 0, 00:20:02.820 "state": "enabled", 00:20:02.820 "thread": "nvmf_tgt_poll_group_000", 00:20:02.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:02.820 "listen_address": { 00:20:02.820 "trtype": "TCP", 00:20:02.820 "adrfam": "IPv4", 00:20:02.820 "traddr": "10.0.0.2", 00:20:02.820 "trsvcid": "4420" 00:20:02.820 }, 00:20:02.820 "peer_address": { 00:20:02.820 "trtype": "TCP", 00:20:02.820 "adrfam": "IPv4", 00:20:02.820 "traddr": "10.0.0.1", 00:20:02.820 "trsvcid": "59874" 00:20:02.820 }, 00:20:02.820 "auth": { 00:20:02.820 "state": "completed", 00:20:02.820 "digest": "sha256", 00:20:02.820 "dhgroup": "ffdhe4096" 00:20:02.820 } 00:20:02.820 } 00:20:02.820 ]' 00:20:02.820 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.820 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.820 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.820 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:02.820 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.079 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.079 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.079 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.079 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:20:03.079 10:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:20:03.646 10:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:03.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.646 10:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:03.646 10:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.646 10:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.646 10:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.646 10:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.646 10:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:03.646 10:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:03.904 10:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:03.905 10:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.905 10:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:03.905 10:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:03.905 10:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:03.905 10:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.905 10:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.905 10:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.905 10:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.905 10:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.905 10:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.905 10:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.905 10:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.163 00:20:04.163 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:20:04.163 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.163 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.422 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.422 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.422 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.422 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.422 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.422 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.422 { 00:20:04.422 "cntlid": 29, 00:20:04.422 "qid": 0, 00:20:04.422 "state": "enabled", 00:20:04.422 "thread": "nvmf_tgt_poll_group_000", 00:20:04.422 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:04.422 "listen_address": { 00:20:04.422 "trtype": "TCP", 00:20:04.422 "adrfam": "IPv4", 00:20:04.422 "traddr": "10.0.0.2", 00:20:04.422 "trsvcid": "4420" 00:20:04.422 }, 00:20:04.422 "peer_address": { 00:20:04.422 "trtype": "TCP", 00:20:04.422 "adrfam": "IPv4", 00:20:04.422 "traddr": "10.0.0.1", 00:20:04.422 "trsvcid": "52012" 00:20:04.422 }, 00:20:04.422 "auth": { 00:20:04.422 "state": "completed", 00:20:04.422 "digest": "sha256", 00:20:04.422 "dhgroup": "ffdhe4096" 00:20:04.422 } 00:20:04.422 } 00:20:04.422 ]' 00:20:04.422 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.422 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.422 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.422 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:04.422 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.680 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.680 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.680 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.680 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:20:04.680 10:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: 
--dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:20:05.247 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.247 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:05.247 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.247 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.247 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.247 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.247 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:05.247 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:05.505 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:05.505 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.505 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:05.505 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:05.505 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:05.505 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.505 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:05.505 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.505 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.505 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.505 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:05.505 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:05.505 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:05.764 00:20:05.764 10:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.764 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.764 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.023 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.023 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.023 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.023 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.023 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.023 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.023 { 00:20:06.023 "cntlid": 31, 00:20:06.023 "qid": 0, 00:20:06.023 "state": "enabled", 00:20:06.023 "thread": "nvmf_tgt_poll_group_000", 00:20:06.023 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:06.023 "listen_address": { 00:20:06.023 "trtype": "TCP", 00:20:06.023 "adrfam": "IPv4", 00:20:06.023 "traddr": "10.0.0.2", 00:20:06.023 "trsvcid": "4420" 00:20:06.023 }, 00:20:06.023 "peer_address": { 00:20:06.023 "trtype": "TCP", 00:20:06.023 "adrfam": "IPv4", 00:20:06.023 "traddr": "10.0.0.1", 00:20:06.023 "trsvcid": "52034" 00:20:06.023 }, 00:20:06.023 "auth": { 00:20:06.023 "state": "completed", 00:20:06.023 "digest": "sha256", 00:20:06.023 "dhgroup": "ffdhe4096" 00:20:06.023 } 00:20:06.023 } 00:20:06.023 ]' 00:20:06.023 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.023 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.023 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.023 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:06.023 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.281 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.281 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.281 10:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.281 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:20:06.281 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:20:06.848 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.848 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:06.848 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.848 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.848 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.848 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.848 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.848 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:06.848 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:07.106 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:07.106 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.106 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:07.106 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:07.106 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:07.106 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.106 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.106 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.106 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.106 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.106 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.106 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.106 10:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.365 00:20:07.365 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.365 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.365 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.623 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.623 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.623 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.623 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.623 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.623 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.623 { 00:20:07.623 "cntlid": 33, 00:20:07.623 "qid": 0, 00:20:07.623 "state": "enabled", 00:20:07.623 "thread": "nvmf_tgt_poll_group_000", 00:20:07.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:07.624 "listen_address": { 00:20:07.624 "trtype": "TCP", 00:20:07.624 "adrfam": "IPv4", 00:20:07.624 "traddr": "10.0.0.2", 00:20:07.624 "trsvcid": "4420" 00:20:07.624 }, 00:20:07.624 "peer_address": { 00:20:07.624 "trtype": "TCP", 00:20:07.624 "adrfam": "IPv4", 00:20:07.624 "traddr": "10.0.0.1", 00:20:07.624 "trsvcid": "52056" 00:20:07.624 }, 00:20:07.624 "auth": { 00:20:07.624 "state": "completed", 00:20:07.624 "digest": "sha256", 00:20:07.624 "dhgroup": "ffdhe6144" 00:20:07.624 } 00:20:07.624 } 00:20:07.624 ]' 00:20:07.624 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.624 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.624 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.882 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:07.882 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.882 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.882 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.882 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.882 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret 
DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:20:07.882 10:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:20:08.449 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.449 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:08.449 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.449 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.449 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.449 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.449 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:08.449 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:08.708 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:08.708 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.708 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:08.708 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:08.708 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:08.708 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.708 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.708 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.708 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.708 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.708 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.708 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.708 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.967 00:20:09.225 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.225 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.225 10:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.225 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.225 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.225 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.225 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.225 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.225 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.225 { 00:20:09.225 "cntlid": 35, 00:20:09.225 "qid": 0, 00:20:09.225 "state": "enabled", 00:20:09.225 "thread": "nvmf_tgt_poll_group_000", 00:20:09.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:09.225 "listen_address": { 00:20:09.225 "trtype": "TCP", 00:20:09.225 "adrfam": "IPv4", 00:20:09.225 "traddr": "10.0.0.2", 00:20:09.225 "trsvcid": "4420" 00:20:09.225 }, 00:20:09.225 "peer_address": { 00:20:09.225 "trtype": "TCP", 00:20:09.225 "adrfam": "IPv4", 00:20:09.225 "traddr": "10.0.0.1", 00:20:09.225 "trsvcid": "52072" 00:20:09.225 }, 00:20:09.225 "auth": { 00:20:09.225 "state": "completed", 00:20:09.225 "digest": "sha256", 00:20:09.225 "dhgroup": "ffdhe6144" 00:20:09.225 } 00:20:09.225 } 00:20:09.225 ]' 00:20:09.225 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.225 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.225 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.484 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:09.484 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.484 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.484 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.484 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.484 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:20:09.484 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:20:10.050 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.309 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:10.309 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.309 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.309 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.309 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.309 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:10.309 10:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:10.309 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:10.309 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.309 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:10.309 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:10.309 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:10.309 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.309 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.309 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.309 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.309 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.309 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.310 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.310 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.877 00:20:10.877 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.877 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.877 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.877 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.877 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.877 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.877 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.877 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.877 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.877 { 00:20:10.877 "cntlid": 37, 00:20:10.877 "qid": 0, 00:20:10.877 "state": "enabled", 00:20:10.877 "thread": "nvmf_tgt_poll_group_000", 00:20:10.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:10.877 "listen_address": { 00:20:10.877 "trtype": "TCP", 00:20:10.877 "adrfam": "IPv4", 00:20:10.877 "traddr": "10.0.0.2", 00:20:10.877 "trsvcid": "4420" 00:20:10.877 }, 00:20:10.877 "peer_address": { 00:20:10.877 "trtype": "TCP", 00:20:10.877 "adrfam": "IPv4", 00:20:10.877 "traddr": "10.0.0.1", 00:20:10.877 "trsvcid": "52112" 00:20:10.877 }, 00:20:10.877 "auth": { 00:20:10.877 "state": "completed", 00:20:10.877 "digest": "sha256", 00:20:10.877 "dhgroup": "ffdhe6144" 00:20:10.877 } 00:20:10.877 } 00:20:10.877 ]' 00:20:10.877 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.136 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.136 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.136 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:11.136 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.136 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.136 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:11.136 10:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.396 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:20:11.396 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:20:11.966 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.966 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:11.966 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.966 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.966 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.966 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.966 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:11.967 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:11.967 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:11.967 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.967 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:11.967 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:11.967 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:11.967 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.967 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:11.967 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.967 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.967 10:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.967 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:11.967 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.967 10:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.534 00:20:12.534 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.534 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.534 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.534 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.534 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.534 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.534 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.534 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.534 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.534 { 00:20:12.534 "cntlid": 39, 00:20:12.534 "qid": 0, 00:20:12.534 "state": "enabled", 00:20:12.534 "thread": "nvmf_tgt_poll_group_000", 00:20:12.534 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:12.534 "listen_address": { 00:20:12.534 "trtype": "TCP", 00:20:12.534 "adrfam": "IPv4", 00:20:12.534 "traddr": "10.0.0.2", 00:20:12.534 "trsvcid": "4420" 00:20:12.534 }, 00:20:12.534 "peer_address": { 00:20:12.534 "trtype": "TCP", 00:20:12.534 "adrfam": "IPv4", 00:20:12.534 "traddr": "10.0.0.1", 00:20:12.534 "trsvcid": "52136" 00:20:12.534 }, 00:20:12.534 "auth": { 00:20:12.534 "state": "completed", 00:20:12.534 "digest": "sha256", 00:20:12.534 "dhgroup": "ffdhe6144" 00:20:12.534 } 00:20:12.534 } 00:20:12.534 ]' 00:20:12.534 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.794 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:12.794 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.794 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:12.794 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.794 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:12.794 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.794 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.053 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:20:13.053 10:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:20:13.621 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.621 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:13.621 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.621 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.621 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.621 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.621 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.621 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:13.621 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:13.621 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:13.621 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.621 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:13.621 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:13.621 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:13.621 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.621 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.621 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:13.621 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.621 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.621 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.621 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.621 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.188 00:20:14.188 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.188 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.188 10:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.447 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.447 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.447 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.447 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.447 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.447 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.447 { 00:20:14.447 "cntlid": 41, 00:20:14.447 "qid": 0, 00:20:14.447 "state": "enabled", 00:20:14.447 "thread": "nvmf_tgt_poll_group_000", 00:20:14.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:14.448 "listen_address": { 00:20:14.448 "trtype": "TCP", 00:20:14.448 "adrfam": "IPv4", 00:20:14.448 "traddr": "10.0.0.2", 00:20:14.448 "trsvcid": "4420" 00:20:14.448 }, 00:20:14.448 "peer_address": { 00:20:14.448 "trtype": "TCP", 00:20:14.448 "adrfam": "IPv4", 00:20:14.448 "traddr": "10.0.0.1", 00:20:14.448 "trsvcid": "48962" 00:20:14.448 }, 00:20:14.448 "auth": { 00:20:14.448 "state": "completed", 00:20:14.448 "digest": "sha256", 00:20:14.448 "dhgroup": "ffdhe8192" 00:20:14.448 } 00:20:14.448 } 00:20:14.448 ]' 00:20:14.448 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.448 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.448 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.448 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:14.448 10:22:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.448 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.448 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.448 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.707 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:20:14.707 10:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:20:15.275 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.275 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:15.275 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.275 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.275 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.275 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.275 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:15.275 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:15.534 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:15.534 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.534 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:15.534 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:15.534 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:15.534 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.534 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.534 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.534 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.534 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.534 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.534 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.534 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.101 00:20:16.101 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.101 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.101 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.101 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.101 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.101 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.101 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.101 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.102 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.102 { 00:20:16.102 "cntlid": 43, 00:20:16.102 "qid": 0, 00:20:16.102 "state": "enabled", 00:20:16.102 "thread": "nvmf_tgt_poll_group_000", 00:20:16.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:16.102 "listen_address": { 00:20:16.102 "trtype": "TCP", 00:20:16.102 "adrfam": "IPv4", 00:20:16.102 "traddr": "10.0.0.2", 00:20:16.102 "trsvcid": "4420" 00:20:16.102 }, 00:20:16.102 "peer_address": { 00:20:16.102 "trtype": "TCP", 00:20:16.102 "adrfam": "IPv4", 00:20:16.102 "traddr": "10.0.0.1", 00:20:16.102 "trsvcid": "48986" 00:20:16.102 }, 00:20:16.102 "auth": { 00:20:16.102 "state": "completed", 00:20:16.102 "digest": "sha256", 00:20:16.102 "dhgroup": "ffdhe8192" 00:20:16.102 } 00:20:16.102 } 00:20:16.102 ]' 00:20:16.102 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.102 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:16.102 10:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.361 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:16.361 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.361 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.361 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.361 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.361 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:20:16.361 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:20:16.928 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.928 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:16.928 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.928 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.928 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.928 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.928 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:16.928 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:17.189 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:17.189 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.189 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:17.189 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:17.189 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:17.189 10:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.189 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.189 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.189 10:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.189 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.189 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.189 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.189 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.755 00:20:17.755 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.755 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.755 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.013 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.013 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.014 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.014 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.014 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.014 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.014 { 00:20:18.014 "cntlid": 45, 00:20:18.014 "qid": 0, 00:20:18.014 "state": "enabled", 00:20:18.014 "thread": "nvmf_tgt_poll_group_000", 00:20:18.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:18.014 "listen_address": { 00:20:18.014 "trtype": "TCP", 00:20:18.014 "adrfam": "IPv4", 00:20:18.014 "traddr": "10.0.0.2", 00:20:18.014 "trsvcid": "4420" 00:20:18.014 }, 00:20:18.014 "peer_address": { 00:20:18.014 "trtype": "TCP", 00:20:18.014 "adrfam": "IPv4", 00:20:18.014 "traddr": "10.0.0.1", 00:20:18.014 "trsvcid": "49004" 00:20:18.014 }, 00:20:18.014 "auth": { 00:20:18.014 "state": "completed", 00:20:18.014 "digest": "sha256", 00:20:18.014 "dhgroup": "ffdhe8192" 00:20:18.014 } 00:20:18.014 } 00:20:18.014 ]' 00:20:18.014 
10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.014 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.014 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.014 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:18.014 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.014 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.014 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.014 10:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.272 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:20:18.272 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:20:18.840 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.840 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:18.840 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.840 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.840 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.840 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.840 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:18.841 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:19.099 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:19.099 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.099 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:19.099 10:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:19.099 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:19.099 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.099 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:19.100 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.100 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.100 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.100 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:19.100 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:19.100 10:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:19.667 00:20:19.667 10:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.667 10:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.667 10:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.667 10:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.667 10:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.667 10:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.667 10:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.667 10:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.667 10:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.667 { 00:20:19.667 "cntlid": 47, 00:20:19.667 "qid": 0, 00:20:19.667 "state": "enabled", 00:20:19.667 "thread": "nvmf_tgt_poll_group_000", 00:20:19.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:19.667 "listen_address": { 00:20:19.667 "trtype": "TCP", 00:20:19.667 "adrfam": "IPv4", 00:20:19.667 "traddr": "10.0.0.2", 00:20:19.667 "trsvcid": "4420" 00:20:19.667 }, 00:20:19.667 "peer_address": { 00:20:19.667 "trtype": "TCP", 00:20:19.667 "adrfam": "IPv4", 00:20:19.667 "traddr": "10.0.0.1", 00:20:19.667 "trsvcid": "49040" 00:20:19.667 }, 00:20:19.667 "auth": { 00:20:19.667 "state": "completed", 00:20:19.667 
"digest": "sha256", 00:20:19.667 "dhgroup": "ffdhe8192" 00:20:19.667 } 00:20:19.667 } 00:20:19.667 ]' 00:20:19.667 10:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.667 10:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.667 10:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.926 10:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:19.926 10:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.926 10:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.926 10:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.926 10:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.926 10:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:20:19.926 10:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:20:20.493 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.493 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:20.493 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.493 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.493 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.493 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:20.493 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.493 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.493 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:20.493 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:20.752 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:20.752 10:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.752 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:20.752 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:20.752 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:20.752 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.752 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.752 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.752 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.752 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.752 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.752 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.752 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.011 00:20:21.011 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.011 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.011 10:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.270 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.270 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.270 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.270 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.270 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.270 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.270 { 00:20:21.270 "cntlid": 49, 00:20:21.270 "qid": 0, 00:20:21.270 "state": "enabled", 00:20:21.270 "thread": "nvmf_tgt_poll_group_000", 00:20:21.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:21.270 "listen_address": { 00:20:21.270 "trtype": "TCP", 00:20:21.270 "adrfam": "IPv4", 
00:20:21.270 "traddr": "10.0.0.2", 00:20:21.270 "trsvcid": "4420" 00:20:21.270 }, 00:20:21.270 "peer_address": { 00:20:21.270 "trtype": "TCP", 00:20:21.270 "adrfam": "IPv4", 00:20:21.270 "traddr": "10.0.0.1", 00:20:21.270 "trsvcid": "49072" 00:20:21.270 }, 00:20:21.270 "auth": { 00:20:21.270 "state": "completed", 00:20:21.270 "digest": "sha384", 00:20:21.270 "dhgroup": "null" 00:20:21.270 } 00:20:21.270 } 00:20:21.270 ]' 00:20:21.270 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.270 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.270 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.270 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:21.270 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.529 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.529 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.529 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.529 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:20:21.529 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:20:22.096 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.096 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:22.096 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.096 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.096 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.096 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.096 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:22.096 10:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:22.355 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:22.355 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.355 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:22.355 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:22.355 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:22.355 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.355 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.355 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.355 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.355 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.355 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.355 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.355 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.614 00:20:22.614 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.614 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.614 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.873 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.873 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.873 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.873 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.873 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.873 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.873 { 00:20:22.873 "cntlid": 51, 00:20:22.873 "qid": 0, 00:20:22.873 "state": "enabled", 
00:20:22.873 "thread": "nvmf_tgt_poll_group_000", 00:20:22.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:22.873 "listen_address": { 00:20:22.873 "trtype": "TCP", 00:20:22.873 "adrfam": "IPv4", 00:20:22.873 "traddr": "10.0.0.2", 00:20:22.873 "trsvcid": "4420" 00:20:22.873 }, 00:20:22.873 "peer_address": { 00:20:22.873 "trtype": "TCP", 00:20:22.873 "adrfam": "IPv4", 00:20:22.873 "traddr": "10.0.0.1", 00:20:22.873 "trsvcid": "49102" 00:20:22.873 }, 00:20:22.873 "auth": { 00:20:22.873 "state": "completed", 00:20:22.873 "digest": "sha384", 00:20:22.873 "dhgroup": "null" 00:20:22.873 } 00:20:22.873 } 00:20:22.873 ]' 00:20:22.873 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.873 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.873 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.873 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:22.873 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.873 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.873 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.873 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.131 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:20:23.132 10:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:20:23.698 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.698 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:23.698 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.698 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.698 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.698 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.698 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:23.698 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:23.957 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:23.957 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.957 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:23.957 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:23.957 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:23.957 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.957 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.957 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.957 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.957 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.957 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.957 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.957 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.216 00:20:24.216 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.216 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.216 10:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.216 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.216 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.216 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.216 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.216 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.475 10:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.475 { 00:20:24.475 "cntlid": 53, 00:20:24.475 "qid": 0, 00:20:24.475 "state": "enabled", 00:20:24.475 "thread": "nvmf_tgt_poll_group_000", 00:20:24.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:24.475 "listen_address": { 00:20:24.475 "trtype": "TCP", 00:20:24.475 "adrfam": "IPv4", 00:20:24.475 "traddr": "10.0.0.2", 00:20:24.475 "trsvcid": "4420" 00:20:24.475 }, 00:20:24.475 "peer_address": { 00:20:24.475 "trtype": "TCP", 00:20:24.475 "adrfam": "IPv4", 00:20:24.475 "traddr": "10.0.0.1", 00:20:24.475 "trsvcid": "49540" 00:20:24.475 }, 00:20:24.475 "auth": { 00:20:24.475 "state": "completed", 00:20:24.475 "digest": "sha384", 00:20:24.475 "dhgroup": "null" 00:20:24.475 } 00:20:24.475 } 00:20:24.475 ]' 00:20:24.475 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.475 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.475 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.475 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:24.475 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.475 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.475 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.475 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.734 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:20:24.734 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:20:25.301 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.301 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:25.301 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.301 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.301 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.301 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:25.301 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:25.301 10:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:25.301 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:25.301 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.301 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:25.301 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:25.301 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:25.301 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.301 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:25.301 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.301 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.301 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.301 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:25.301 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.301 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.560 00:20:25.560 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.560 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.560 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.819 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.819 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.819 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.819 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.819 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.819 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.819 { 00:20:25.819 "cntlid": 55, 00:20:25.819 "qid": 0, 00:20:25.819 "state": "enabled", 00:20:25.819 "thread": "nvmf_tgt_poll_group_000", 00:20:25.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:25.819 "listen_address": { 00:20:25.819 "trtype": "TCP", 00:20:25.819 "adrfam": "IPv4", 00:20:25.819 "traddr": "10.0.0.2", 00:20:25.819 "trsvcid": "4420" 00:20:25.819 }, 00:20:25.819 "peer_address": { 00:20:25.819 "trtype": "TCP", 00:20:25.819 "adrfam": "IPv4", 00:20:25.819 "traddr": "10.0.0.1", 00:20:25.819 "trsvcid": "49568" 00:20:25.819 }, 00:20:25.819 "auth": { 00:20:25.819 "state": "completed", 00:20:25.819 "digest": "sha384", 00:20:25.819 "dhgroup": "null" 00:20:25.819 } 00:20:25.819 } 00:20:25.819 ]' 00:20:25.819 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.819 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.819 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.819 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:25.819 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.078 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.078 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.078 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.078 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:20:26.078 10:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:20:26.646 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.646 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:26.646 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.904 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.904 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.904 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.904 10:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.904 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:26.904 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:26.904 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:26.904 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.904 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:26.904 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:26.904 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:26.904 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.904 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.904 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.904 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.904 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.904 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.905 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.905 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.163 00:20:27.163 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.163 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.163 10:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.422 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.422 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.422 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:27.422 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.422 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.422 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.422 { 00:20:27.422 "cntlid": 57, 00:20:27.422 "qid": 0, 00:20:27.422 "state": "enabled", 00:20:27.422 "thread": "nvmf_tgt_poll_group_000", 00:20:27.422 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:27.422 "listen_address": { 00:20:27.422 "trtype": "TCP", 00:20:27.422 "adrfam": "IPv4", 00:20:27.422 "traddr": "10.0.0.2", 00:20:27.422 "trsvcid": "4420" 00:20:27.422 }, 00:20:27.422 "peer_address": { 00:20:27.422 "trtype": "TCP", 00:20:27.422 "adrfam": "IPv4", 00:20:27.422 "traddr": "10.0.0.1", 00:20:27.422 "trsvcid": "49590" 00:20:27.422 }, 00:20:27.422 "auth": { 00:20:27.422 "state": "completed", 00:20:27.422 "digest": "sha384", 00:20:27.422 "dhgroup": "ffdhe2048" 00:20:27.422 } 00:20:27.422 } 00:20:27.422 ]' 00:20:27.422 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.422 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.422 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.422 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:27.422 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.422 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.681 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.681 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.681 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:20:27.681 10:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:20:28.248 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.249 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:28.249 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.249 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.249 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.249 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.249 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:28.249 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:28.507 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:28.507 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.507 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:28.507 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:28.507 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:28.507 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.507 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.507 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.507 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.507 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.507 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.507 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.507 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.766 00:20:28.766 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.766 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.766 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.025 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.025 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.025 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.025 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.025 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.025 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.025 { 00:20:29.025 "cntlid": 59, 00:20:29.025 "qid": 0, 00:20:29.025 "state": "enabled", 00:20:29.025 "thread": "nvmf_tgt_poll_group_000", 00:20:29.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:29.025 "listen_address": { 00:20:29.025 "trtype": "TCP", 00:20:29.025 "adrfam": "IPv4", 00:20:29.025 "traddr": "10.0.0.2", 00:20:29.025 "trsvcid": "4420" 00:20:29.025 }, 00:20:29.025 "peer_address": { 00:20:29.025 "trtype": "TCP", 00:20:29.025 "adrfam": "IPv4", 00:20:29.025 "traddr": "10.0.0.1", 00:20:29.025 "trsvcid": "49606" 00:20:29.025 }, 00:20:29.025 "auth": { 00:20:29.025 "state": "completed", 00:20:29.025 "digest": "sha384", 00:20:29.025 "dhgroup": "ffdhe2048" 00:20:29.025 } 00:20:29.025 } 00:20:29.025 ]' 00:20:29.025 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.025 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.025 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.025 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:29.025 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.025 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.025 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.025 10:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.284 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:20:29.284 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:20:29.851 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.851 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:29.851 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.851 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.851 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.851 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.851 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:29.852 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:30.138 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:30.138 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.138 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:30.138 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:30.138 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:30.138 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.138 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.138 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.138 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.138 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.138 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.138 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.138 10:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.475 00:20:30.475 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.475 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.475 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.475 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.475 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.475 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.475 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.475 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.475 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.475 { 00:20:30.475 "cntlid": 61, 00:20:30.475 "qid": 0, 00:20:30.475 "state": "enabled", 00:20:30.475 "thread": "nvmf_tgt_poll_group_000", 00:20:30.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:30.475 "listen_address": { 00:20:30.475 "trtype": "TCP", 00:20:30.475 "adrfam": "IPv4", 00:20:30.475 "traddr": "10.0.0.2", 00:20:30.475 "trsvcid": "4420" 00:20:30.475 }, 00:20:30.475 "peer_address": { 00:20:30.475 "trtype": "TCP", 00:20:30.475 "adrfam": "IPv4", 00:20:30.475 "traddr": "10.0.0.1", 00:20:30.475 "trsvcid": "49630" 00:20:30.475 }, 00:20:30.475 "auth": { 00:20:30.475 "state": "completed", 00:20:30.475 "digest": "sha384", 00:20:30.475 "dhgroup": "ffdhe2048" 00:20:30.475 } 00:20:30.475 } 00:20:30.475 ]' 00:20:30.475 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.475 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.734 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.734 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:30.734 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.734 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.734 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.734 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.734 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:20:30.734 10:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:20:31.303 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.303 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:31.303 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.303 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.303 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.303 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.303 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:31.303 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:31.562 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:31.562 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.562 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.562 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:31.562 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:31.562 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.562 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:31.562 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.562 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.562 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.562 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:31.562 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.562 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.821 00:20:31.821 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.821 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.821 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.079 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.079 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.079 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.079 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.079 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.079 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.079 { 00:20:32.079 "cntlid": 63, 00:20:32.079 "qid": 0, 00:20:32.079 "state": "enabled", 00:20:32.079 "thread": "nvmf_tgt_poll_group_000", 00:20:32.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:32.079 "listen_address": { 00:20:32.079 "trtype": "TCP", 00:20:32.079 "adrfam": "IPv4", 00:20:32.079 "traddr": "10.0.0.2", 00:20:32.079 "trsvcid": "4420" 00:20:32.079 }, 00:20:32.079 "peer_address": { 00:20:32.079 "trtype": "TCP", 00:20:32.079 "adrfam": "IPv4", 00:20:32.079 "traddr": "10.0.0.1", 00:20:32.079 "trsvcid": "49660" 00:20:32.079 }, 00:20:32.079 "auth": { 00:20:32.079 "state": "completed", 00:20:32.079 "digest": "sha384", 00:20:32.079 "dhgroup": "ffdhe2048" 00:20:32.079 } 00:20:32.079 } 00:20:32.079 ]' 00:20:32.079 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.079 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.079 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.079 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:32.079 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.336 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.337 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.337 10:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.337 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:20:32.337 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:20:32.903 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:32.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.903 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:32.903 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.903 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.903 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.903 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:32.903 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.903 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:32.903 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:33.162 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:33.162 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.162 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:33.162 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:33.162 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:33.162 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.162 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.162 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.162 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.162 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.162 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.162 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.162 10:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.420 
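For reference, the cycle the trace above keeps repeating reduces to a short host/target RPC sequence. The sketch below is a minimal reconstruction using the socket path, address, and NQNs that appear in this run; key0/ckey0 are assumed to be keyring entries registered earlier by auth.sh, and the target-side rpc.py is assumed to use its default socket (the trace only shows the host-side -s /var/tmp/host.sock explicitly).

# hypothetical standalone sketch of one connect_authenticate pass (sha384 / ffdhe3072, key index 0)
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
SUBNQN=nqn.2024-03.io.spdk:cnode0
# host side: restrict the initiator to one digest/dhgroup combination
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
# target side: allow the host on the subsystem and bind its DH-HMAC-CHAP keys
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0
# host side: attach a controller, authenticating with the same keys
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0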
00:20:33.420 10:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.420 10:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.420 10:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.678 10:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.678 10:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.678 10:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.678 10:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.678 10:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.678 10:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.678 { 00:20:33.678 "cntlid": 65, 00:20:33.678 "qid": 0, 00:20:33.678 "state": "enabled", 00:20:33.678 "thread": "nvmf_tgt_poll_group_000", 00:20:33.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:33.678 "listen_address": { 00:20:33.678 "trtype": "TCP", 00:20:33.678 "adrfam": "IPv4", 00:20:33.678 "traddr": "10.0.0.2", 00:20:33.678 "trsvcid": "4420" 00:20:33.678 }, 00:20:33.678 "peer_address": { 00:20:33.678 "trtype": "TCP", 00:20:33.678 "adrfam": "IPv4", 00:20:33.678 "traddr": "10.0.0.1", 00:20:33.678 "trsvcid": "58688" 00:20:33.678 }, 00:20:33.678 "auth": { 00:20:33.678 "state": "completed", 00:20:33.678 "digest": "sha384", 00:20:33.678 "dhgroup": "ffdhe3072" 00:20:33.678 } 00:20:33.678 } 00:20:33.678 ]' 00:20:33.678 10:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.678 10:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.678 10:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.678 10:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:33.678 10:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.937 10:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.937 10:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.937 10:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.937 10:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:20:33.937 10:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:20:34.509 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.509 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:34.509 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.509 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.509 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.509 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.509 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:34.509 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:34.768 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:34.768 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.768 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:34.768 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:34.768 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:34.768 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.768 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.768 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.768 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.768 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.768 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.768 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.768 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.027 00:20:35.027 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.027 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.027 10:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.285 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.286 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.286 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.286 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.286 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.286 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.286 { 00:20:35.286 "cntlid": 67, 00:20:35.286 "qid": 0, 00:20:35.286 "state": "enabled", 00:20:35.286 "thread": "nvmf_tgt_poll_group_000", 00:20:35.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:35.286 "listen_address": { 00:20:35.286 "trtype": "TCP", 00:20:35.286 "adrfam": "IPv4", 00:20:35.286 "traddr": "10.0.0.2", 00:20:35.286 "trsvcid": "4420" 00:20:35.286 }, 00:20:35.286 "peer_address": { 00:20:35.286 "trtype": "TCP", 00:20:35.286 "adrfam": "IPv4", 00:20:35.286 "traddr": "10.0.0.1", 00:20:35.286 "trsvcid": "58714" 00:20:35.286 }, 00:20:35.286 "auth": { 00:20:35.286 "state": "completed", 00:20:35.286 "digest": "sha384", 00:20:35.286 "dhgroup": "ffdhe3072" 00:20:35.286 } 00:20:35.286 } 00:20:35.286 ]' 00:20:35.286 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.286 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.286 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.286 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:35.286 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.544 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.544 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.545 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.545 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret 
DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:20:35.545 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:20:36.112 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.112 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:36.112 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.112 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.112 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.112 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.112 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:36.112 10:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:36.371 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:36.371 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.371 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.371 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:36.371 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:36.371 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.371 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.371 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.371 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.371 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.371 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.371 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.371 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.629 00:20:36.629 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.629 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.629 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.888 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.888 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.888 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.888 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.888 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.888 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.888 { 00:20:36.888 "cntlid": 69, 00:20:36.888 "qid": 0, 00:20:36.888 "state": "enabled", 00:20:36.888 "thread": "nvmf_tgt_poll_group_000", 00:20:36.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:36.888 "listen_address": { 00:20:36.888 "trtype": "TCP", 00:20:36.888 "adrfam": "IPv4", 00:20:36.888 "traddr": "10.0.0.2", 00:20:36.888 "trsvcid": "4420" 00:20:36.888 }, 00:20:36.888 "peer_address": { 00:20:36.888 "trtype": "TCP", 00:20:36.888 "adrfam": "IPv4", 00:20:36.888 "traddr": "10.0.0.1", 00:20:36.888 "trsvcid": "58732" 00:20:36.888 }, 00:20:36.888 "auth": { 00:20:36.888 "state": "completed", 00:20:36.888 "digest": "sha384", 00:20:36.888 "dhgroup": "ffdhe3072" 00:20:36.888 } 00:20:36.888 } 00:20:36.888 ]' 00:20:36.888 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.888 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.888 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.888 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:36.888 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.888 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.888 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.888 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:37.147 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:20:37.147 10:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:20:37.714 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.714 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:37.714 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.714 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.714 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.714 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.714 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:37.714 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:37.973 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:37.973 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.973 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:37.973 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:37.973 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:37.973 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.973 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:37.973 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.973 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.973 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.973 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
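The same key material is also exercised in-band with nvme-cli, as in the nvme connect / nvme disconnect calls in the trace. A minimal sketch, assuming the DHHC-1 secrets are the plaintext values shown for the key index under test (left here as placeholder variables rather than real keys):

# hypothetical in-band check with nvme-cli, mirroring the trace's nvme_connect helper
KEY='DHHC-1:02:...'    # --dhchap-secret value for this key index (placeholder)
CKEY='DHHC-1:01:...'   # --dhchap-ctrl-secret value (placeholder)
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 \
    --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 \
    --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
# tear the in-band connection back down before the next iteration
nvme disconnect -n nqn.2024-03.io.spdk:cnode0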
00:20:37.973 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:37.973 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.231 00:20:38.232 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.232 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.232 10:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.490 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.490 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.490 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.490 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.490 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.490 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.490 { 00:20:38.490 "cntlid": 71, 00:20:38.490 "qid": 0, 00:20:38.490 "state": "enabled", 00:20:38.490 "thread": "nvmf_tgt_poll_group_000", 00:20:38.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:38.490 "listen_address": { 00:20:38.490 "trtype": "TCP", 00:20:38.490 "adrfam": "IPv4", 00:20:38.490 "traddr": "10.0.0.2", 00:20:38.490 "trsvcid": "4420" 00:20:38.490 }, 00:20:38.490 "peer_address": { 00:20:38.490 "trtype": "TCP", 00:20:38.490 "adrfam": "IPv4", 00:20:38.490 "traddr": "10.0.0.1", 00:20:38.490 "trsvcid": "58758" 00:20:38.490 }, 00:20:38.490 "auth": { 00:20:38.490 "state": "completed", 00:20:38.490 "digest": "sha384", 00:20:38.490 "dhgroup": "ffdhe3072" 00:20:38.490 } 00:20:38.490 } 00:20:38.490 ]' 00:20:38.490 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.490 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.490 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.490 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:38.490 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.490 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.490 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.490 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.749 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:20:38.749 10:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:20:39.317 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.317 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:39.317 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.317 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.317 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.317 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.317 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.317 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:39.317 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:39.575 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:39.575 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.575 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:39.575 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:39.575 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:39.575 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.575 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.575 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.575 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.575 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
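Each pass is then verified the same way: the host-side controller must show up by name, and the target's qpair listing must report the negotiated digest, dhgroup, and a completed auth state. A condensed sketch of the jq checks the trace runs, under the same socket-path assumptions as the sketch above:

# hypothetical condensed version of the connect_authenticate verification step
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
QPAIRS=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
echo "$QPAIRS" | jq -r '.[0].auth.digest'    # expect: sha384
echo "$QPAIRS" | jq -r '.[0].auth.dhgroup'   # expect: whichever ffdhe group this pass configured
echo "$QPAIRS" | jq -r '.[0].auth.state'     # expect: completed
# detach before moving on to the next key/dhgroup combination
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0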
00:20:39.575 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.576 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.576 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.834 00:20:39.834 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.834 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.834 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.834 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.834 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.834 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.834 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.834 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.834 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.834 { 00:20:39.834 "cntlid": 73, 00:20:39.834 "qid": 0, 00:20:39.834 "state": "enabled", 00:20:39.834 "thread": "nvmf_tgt_poll_group_000", 00:20:39.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:39.834 "listen_address": { 00:20:39.834 "trtype": "TCP", 00:20:39.834 "adrfam": "IPv4", 00:20:39.834 "traddr": "10.0.0.2", 00:20:39.834 "trsvcid": "4420" 00:20:39.834 }, 00:20:39.834 "peer_address": { 00:20:39.834 "trtype": "TCP", 00:20:39.834 "adrfam": "IPv4", 00:20:39.834 "traddr": "10.0.0.1", 00:20:39.834 "trsvcid": "58782" 00:20:39.834 }, 00:20:39.834 "auth": { 00:20:39.834 "state": "completed", 00:20:39.834 "digest": "sha384", 00:20:39.834 "dhgroup": "ffdhe4096" 00:20:39.834 } 00:20:39.834 } 00:20:39.834 ]' 00:20:40.093 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.093 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.093 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.093 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:40.093 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.093 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.093 
10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.093 10:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.352 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:20:40.352 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:20:40.920 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.920 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:40.920 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.920 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.920 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.920 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.920 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:40.920 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:40.920 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:40.920 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.920 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.920 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:40.920 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:40.920 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.920 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.920 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.920 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.920 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.920 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.920 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.920 10:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.179 00:20:41.436 10:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.436 10:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.436 10:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.436 10:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.436 10:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.436 10:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.436 10:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.436 10:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.436 10:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.436 { 00:20:41.436 "cntlid": 75, 00:20:41.436 "qid": 0, 00:20:41.436 "state": "enabled", 00:20:41.436 "thread": "nvmf_tgt_poll_group_000", 00:20:41.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:41.436 "listen_address": { 00:20:41.436 "trtype": "TCP", 00:20:41.436 "adrfam": "IPv4", 00:20:41.436 "traddr": "10.0.0.2", 00:20:41.436 "trsvcid": "4420" 00:20:41.436 }, 00:20:41.436 "peer_address": { 00:20:41.436 "trtype": "TCP", 00:20:41.436 "adrfam": "IPv4", 00:20:41.436 "traddr": "10.0.0.1", 00:20:41.436 "trsvcid": "58822" 00:20:41.436 }, 00:20:41.436 "auth": { 00:20:41.436 "state": "completed", 00:20:41.436 "digest": "sha384", 00:20:41.436 "dhgroup": "ffdhe4096" 00:20:41.436 } 00:20:41.436 } 00:20:41.436 ]' 00:20:41.436 10:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.436 10:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.436 10:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.694 10:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:20:41.694 10:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.694 10:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.694 10:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.694 10:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.694 10:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:20:41.694 10:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:20:42.261 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.261 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:42.261 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.261 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.261 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.261 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.261 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:42.261 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:42.519 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:42.519 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.519 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.519 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:42.519 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:42.519 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.519 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.519 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.519 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.519 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.519 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.519 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.520 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.778 00:20:42.778 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.778 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.778 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.036 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.036 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.036 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.036 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.036 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.036 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.036 { 00:20:43.036 "cntlid": 77, 00:20:43.036 "qid": 0, 00:20:43.036 "state": "enabled", 00:20:43.036 "thread": "nvmf_tgt_poll_group_000", 00:20:43.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:43.036 "listen_address": { 00:20:43.036 "trtype": "TCP", 00:20:43.036 "adrfam": "IPv4", 00:20:43.036 "traddr": "10.0.0.2", 00:20:43.036 "trsvcid": "4420" 00:20:43.036 }, 00:20:43.036 "peer_address": { 00:20:43.036 "trtype": "TCP", 00:20:43.037 "adrfam": "IPv4", 00:20:43.037 "traddr": "10.0.0.1", 00:20:43.037 "trsvcid": "36416" 00:20:43.037 }, 00:20:43.037 "auth": { 00:20:43.037 "state": "completed", 00:20:43.037 "digest": "sha384", 00:20:43.037 "dhgroup": "ffdhe4096" 00:20:43.037 } 00:20:43.037 } 00:20:43.037 ]' 00:20:43.037 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.037 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.037 10:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.037 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:43.037 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.037 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.037 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.037 10:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.295 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:20:43.295 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:20:43.862 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.862 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:43.862 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.862 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.862 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.862 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.862 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:43.862 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:44.120 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:44.120 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.120 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.120 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:44.120 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:44.120 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.120 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:44.120 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.120 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.120 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.120 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:44.120 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.120 10:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.379 00:20:44.379 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.379 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.379 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.638 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.638 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.638 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.638 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.638 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.638 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.638 { 00:20:44.638 "cntlid": 79, 00:20:44.638 "qid": 0, 00:20:44.638 "state": "enabled", 00:20:44.638 "thread": "nvmf_tgt_poll_group_000", 00:20:44.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:44.638 "listen_address": { 00:20:44.638 "trtype": "TCP", 00:20:44.638 "adrfam": "IPv4", 00:20:44.638 "traddr": "10.0.0.2", 00:20:44.638 "trsvcid": "4420" 00:20:44.638 }, 00:20:44.638 "peer_address": { 00:20:44.638 "trtype": "TCP", 00:20:44.638 "adrfam": "IPv4", 00:20:44.638 "traddr": "10.0.0.1", 00:20:44.638 "trsvcid": "36454" 00:20:44.638 }, 00:20:44.638 "auth": { 00:20:44.638 "state": "completed", 00:20:44.638 "digest": "sha384", 00:20:44.638 "dhgroup": "ffdhe4096" 00:20:44.638 } 00:20:44.638 } 00:20:44.638 ]' 00:20:44.638 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.638 10:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.638 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.638 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:44.638 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.638 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.638 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.638 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.897 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:20:44.897 10:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:20:45.464 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.464 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:45.464 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.464 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.464 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.464 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.464 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.464 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:45.464 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:45.723 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:45.723 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.723 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.723 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:45.723 10:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:45.723 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.723 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.723 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.723 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.723 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.723 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.723 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.723 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.982 00:20:45.982 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.982 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.982 10:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.240 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.240 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.240 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.240 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.240 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.240 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.240 { 00:20:46.240 "cntlid": 81, 00:20:46.240 "qid": 0, 00:20:46.240 "state": "enabled", 00:20:46.240 "thread": "nvmf_tgt_poll_group_000", 00:20:46.240 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:46.240 "listen_address": { 00:20:46.240 "trtype": "TCP", 00:20:46.240 "adrfam": "IPv4", 00:20:46.240 "traddr": "10.0.0.2", 00:20:46.240 "trsvcid": "4420" 00:20:46.240 }, 00:20:46.240 "peer_address": { 00:20:46.240 "trtype": "TCP", 00:20:46.240 "adrfam": "IPv4", 00:20:46.240 "traddr": "10.0.0.1", 00:20:46.240 "trsvcid": "36480" 00:20:46.240 }, 00:20:46.240 "auth": { 00:20:46.240 "state": "completed", 00:20:46.240 "digest": 
"sha384", 00:20:46.241 "dhgroup": "ffdhe6144" 00:20:46.241 } 00:20:46.241 } 00:20:46.241 ]' 00:20:46.241 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.241 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.241 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.241 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:46.500 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.500 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.500 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.500 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.500 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:20:46.500 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:20:47.067 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.067 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:47.067 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.067 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.067 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.067 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.067 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:47.067 10:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:47.326 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:47.326 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.326 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.326 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:47.326 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:47.326 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.326 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.326 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.326 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.326 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.326 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.326 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.326 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.585 00:20:47.585 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.585 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.585 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.843 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.843 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.843 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.843 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.843 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.843 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.843 { 00:20:47.843 "cntlid": 83, 00:20:47.843 "qid": 0, 00:20:47.843 "state": "enabled", 00:20:47.843 "thread": "nvmf_tgt_poll_group_000", 00:20:47.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:47.843 "listen_address": { 00:20:47.843 "trtype": "TCP", 00:20:47.843 "adrfam": "IPv4", 00:20:47.843 "traddr": "10.0.0.2", 00:20:47.843 
"trsvcid": "4420" 00:20:47.843 }, 00:20:47.843 "peer_address": { 00:20:47.843 "trtype": "TCP", 00:20:47.843 "adrfam": "IPv4", 00:20:47.843 "traddr": "10.0.0.1", 00:20:47.843 "trsvcid": "36506" 00:20:47.843 }, 00:20:47.843 "auth": { 00:20:47.843 "state": "completed", 00:20:47.843 "digest": "sha384", 00:20:47.843 "dhgroup": "ffdhe6144" 00:20:47.843 } 00:20:47.843 } 00:20:47.843 ]' 00:20:47.844 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.844 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.844 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.102 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:48.102 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.102 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.102 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.102 10:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.361 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:20:48.361 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:20:48.928 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.928 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:48.928 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.928 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.928 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.928 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.928 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.928 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:48.928 
10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:48.928 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.928 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:48.928 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:48.928 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:48.928 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.928 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.928 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.928 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.928 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.928 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.928 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.928 10:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.496 00:20:49.496 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.496 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.496 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.496 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.496 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.496 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.496 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.496 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.496 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.496 { 00:20:49.496 "cntlid": 85, 00:20:49.496 "qid": 0, 00:20:49.496 "state": "enabled", 00:20:49.496 "thread": "nvmf_tgt_poll_group_000", 00:20:49.496 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:49.496 "listen_address": { 00:20:49.496 "trtype": "TCP", 00:20:49.496 "adrfam": "IPv4", 00:20:49.496 "traddr": "10.0.0.2", 00:20:49.496 "trsvcid": "4420" 00:20:49.496 }, 00:20:49.496 "peer_address": { 00:20:49.496 "trtype": "TCP", 00:20:49.496 "adrfam": "IPv4", 00:20:49.496 "traddr": "10.0.0.1", 00:20:49.496 "trsvcid": "36530" 00:20:49.496 }, 00:20:49.496 "auth": { 00:20:49.496 "state": "completed", 00:20:49.496 "digest": "sha384", 00:20:49.496 "dhgroup": "ffdhe6144" 00:20:49.496 } 00:20:49.496 } 00:20:49.496 ]' 00:20:49.496 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.496 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.755 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.755 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:49.755 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.755 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.755 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.755 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.014 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:20:50.014 10:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:20:50.581 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.581 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:50.581 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.581 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.581 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.581 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.581 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:50.581 10:22:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:50.581 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:50.581 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.581 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:50.581 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:50.581 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:50.581 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.581 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:50.581 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.581 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.581 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.581 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:50.581 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.581 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:51.149 00:20:51.149 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.149 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.149 10:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.149 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.149 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.149 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.149 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.149 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.149 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.149 { 00:20:51.149 "cntlid": 87, 
00:20:51.149 "qid": 0, 00:20:51.149 "state": "enabled", 00:20:51.149 "thread": "nvmf_tgt_poll_group_000", 00:20:51.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:51.149 "listen_address": { 00:20:51.149 "trtype": "TCP", 00:20:51.149 "adrfam": "IPv4", 00:20:51.149 "traddr": "10.0.0.2", 00:20:51.149 "trsvcid": "4420" 00:20:51.149 }, 00:20:51.149 "peer_address": { 00:20:51.149 "trtype": "TCP", 00:20:51.149 "adrfam": "IPv4", 00:20:51.149 "traddr": "10.0.0.1", 00:20:51.149 "trsvcid": "36564" 00:20:51.149 }, 00:20:51.149 "auth": { 00:20:51.149 "state": "completed", 00:20:51.149 "digest": "sha384", 00:20:51.149 "dhgroup": "ffdhe6144" 00:20:51.149 } 00:20:51.149 } 00:20:51.149 ]' 00:20:51.149 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.408 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.408 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.408 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:51.408 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.408 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.408 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.408 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.665 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:20:51.665 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:20:52.233 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.233 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:52.233 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.233 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.233 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.233 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.233 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.233 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:52.233 10:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:52.233 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:52.233 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.233 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.233 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:52.233 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:52.233 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.233 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.233 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.233 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.491 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.491 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.492 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.492 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.750 00:20:52.750 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.750 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.750 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.009 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.009 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.009 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.009 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.009 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.009 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.009 { 00:20:53.009 "cntlid": 89, 00:20:53.009 "qid": 0, 00:20:53.009 "state": "enabled", 00:20:53.009 "thread": "nvmf_tgt_poll_group_000", 00:20:53.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:53.009 "listen_address": { 00:20:53.009 "trtype": "TCP", 00:20:53.009 "adrfam": "IPv4", 00:20:53.009 "traddr": "10.0.0.2", 00:20:53.009 "trsvcid": "4420" 00:20:53.009 }, 00:20:53.009 "peer_address": { 00:20:53.009 "trtype": "TCP", 00:20:53.009 "adrfam": "IPv4", 00:20:53.009 "traddr": "10.0.0.1", 00:20:53.009 "trsvcid": "36598" 00:20:53.009 }, 00:20:53.009 "auth": { 00:20:53.009 "state": "completed", 00:20:53.009 "digest": "sha384", 00:20:53.009 "dhgroup": "ffdhe8192" 00:20:53.009 } 00:20:53.009 } 00:20:53.009 ]' 00:20:53.009 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.009 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.009 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.009 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:53.009 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.268 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.268 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.268 10:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.268 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:20:53.268 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:20:53.835 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.836 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:53.836 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.836 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.836 10:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.836 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.836 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:53.836 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:54.094 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:54.094 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.094 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.094 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:54.094 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:54.094 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.094 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.094 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.094 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.094 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.094 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.094 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.094 10:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.662 00:20:54.662 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.662 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.662 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.921 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.921 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:54.921 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.921 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.921 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.921 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.921 { 00:20:54.921 "cntlid": 91, 00:20:54.921 "qid": 0, 00:20:54.921 "state": "enabled", 00:20:54.921 "thread": "nvmf_tgt_poll_group_000", 00:20:54.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:54.921 "listen_address": { 00:20:54.921 "trtype": "TCP", 00:20:54.921 "adrfam": "IPv4", 00:20:54.921 "traddr": "10.0.0.2", 00:20:54.921 "trsvcid": "4420" 00:20:54.921 }, 00:20:54.921 "peer_address": { 00:20:54.921 "trtype": "TCP", 00:20:54.921 "adrfam": "IPv4", 00:20:54.921 "traddr": "10.0.0.1", 00:20:54.921 "trsvcid": "34754" 00:20:54.921 }, 00:20:54.921 "auth": { 00:20:54.921 "state": "completed", 00:20:54.921 "digest": "sha384", 00:20:54.921 "dhgroup": "ffdhe8192" 00:20:54.921 } 00:20:54.921 } 00:20:54.921 ]' 00:20:54.921 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.921 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.921 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.921 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:54.921 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.921 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.921 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.921 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.179 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:20:55.179 10:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:20:55.745 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.745 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:55.745 10:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.745 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.745 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.745 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.745 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:55.745 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:56.003 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:56.003 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.003 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:56.003 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:56.003 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:56.003 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.003 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.003 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.003 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.003 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.003 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.003 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.003 10:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.570 00:20:56.570 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.570 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.570 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.570 10:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.570 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.570 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.570 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.570 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.570 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.570 { 00:20:56.570 "cntlid": 93, 00:20:56.570 "qid": 0, 00:20:56.570 "state": "enabled", 00:20:56.570 "thread": "nvmf_tgt_poll_group_000", 00:20:56.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:56.570 "listen_address": { 00:20:56.570 "trtype": "TCP", 00:20:56.570 "adrfam": "IPv4", 00:20:56.570 "traddr": "10.0.0.2", 00:20:56.570 "trsvcid": "4420" 00:20:56.570 }, 00:20:56.570 "peer_address": { 00:20:56.570 "trtype": "TCP", 00:20:56.570 "adrfam": "IPv4", 00:20:56.570 "traddr": "10.0.0.1", 00:20:56.570 "trsvcid": "34776" 00:20:56.570 }, 00:20:56.570 "auth": { 00:20:56.570 "state": "completed", 00:20:56.570 "digest": "sha384", 00:20:56.570 "dhgroup": "ffdhe8192" 00:20:56.570 } 00:20:56.570 } 00:20:56.570 ]' 00:20:56.570 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.570 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.570 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.828 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:56.828 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.828 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.828 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.828 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.086 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:20:57.086 10:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:20:57.651 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.651 10:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:57.651 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.651 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.651 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.651 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.651 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.651 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.651 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:57.651 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.651 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.651 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:57.651 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:57.651 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.651 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:57.651 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.651 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.651 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.651 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:57.651 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.651 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.218 00:20:58.218 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.218 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.218 10:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.477 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.477 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.477 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.477 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.477 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.477 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.477 { 00:20:58.477 "cntlid": 95, 00:20:58.477 "qid": 0, 00:20:58.477 "state": "enabled", 00:20:58.477 "thread": "nvmf_tgt_poll_group_000", 00:20:58.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:58.477 "listen_address": { 00:20:58.477 "trtype": "TCP", 00:20:58.477 "adrfam": "IPv4", 00:20:58.477 "traddr": "10.0.0.2", 00:20:58.477 "trsvcid": "4420" 00:20:58.477 }, 00:20:58.477 "peer_address": { 00:20:58.477 "trtype": "TCP", 00:20:58.477 "adrfam": "IPv4", 00:20:58.477 "traddr": "10.0.0.1", 00:20:58.477 "trsvcid": "34812" 00:20:58.477 }, 00:20:58.477 "auth": { 00:20:58.477 "state": "completed", 00:20:58.477 "digest": "sha384", 00:20:58.477 "dhgroup": "ffdhe8192" 00:20:58.477 } 00:20:58.477 } 00:20:58.477 ]' 00:20:58.477 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.477 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.477 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.477 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:58.477 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.477 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.477 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.477 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.736 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:20:58.736 10:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:20:59.303 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.303 10:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:59.303 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.303 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.303 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.303 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:59.303 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.303 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.303 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:59.303 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:59.561 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:59.561 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.561 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:59.562 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:59.562 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:59.562 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.562 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.562 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.562 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.562 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.562 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.562 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.562 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.821 00:20:59.821 
10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.821 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.821 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.821 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.821 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.821 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.821 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.821 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.821 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.821 { 00:20:59.821 "cntlid": 97, 00:20:59.821 "qid": 0, 00:20:59.821 "state": "enabled", 00:20:59.821 "thread": "nvmf_tgt_poll_group_000", 00:20:59.821 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:59.821 "listen_address": { 00:20:59.821 "trtype": "TCP", 00:20:59.821 "adrfam": "IPv4", 00:20:59.821 "traddr": "10.0.0.2", 00:20:59.821 "trsvcid": "4420" 00:20:59.821 }, 00:20:59.821 "peer_address": { 00:20:59.821 "trtype": "TCP", 00:20:59.821 "adrfam": "IPv4", 00:20:59.821 "traddr": "10.0.0.1", 00:20:59.821 "trsvcid": "34844" 00:20:59.821 }, 00:20:59.821 "auth": { 00:20:59.821 "state": "completed", 00:20:59.821 "digest": "sha512", 00:20:59.821 "dhgroup": "null" 00:20:59.821 } 00:20:59.821 } 00:20:59.821 ]' 00:20:59.821 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.079 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.079 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.079 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:00.079 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.079 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.079 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.079 10:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.338 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:21:00.338 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:21:00.904 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.904 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:00.904 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.904 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.904 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.904 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.904 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:00.904 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:01.163 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:01.163 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.163 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:01.163 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:01.163 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:01.163 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.163 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.163 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.163 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.163 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.163 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.163 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.163 10:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.163 00:21:01.420 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.420 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.420 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.420 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.420 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.420 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.420 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.420 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.420 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.420 { 00:21:01.420 "cntlid": 99, 00:21:01.420 "qid": 0, 00:21:01.420 "state": "enabled", 00:21:01.420 "thread": "nvmf_tgt_poll_group_000", 00:21:01.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:01.420 "listen_address": { 00:21:01.420 "trtype": "TCP", 00:21:01.420 "adrfam": "IPv4", 00:21:01.420 "traddr": "10.0.0.2", 00:21:01.420 "trsvcid": "4420" 00:21:01.420 }, 00:21:01.420 "peer_address": { 00:21:01.420 "trtype": "TCP", 00:21:01.420 "adrfam": "IPv4", 00:21:01.420 "traddr": "10.0.0.1", 00:21:01.420 "trsvcid": "34868" 00:21:01.420 }, 00:21:01.420 "auth": { 00:21:01.420 "state": "completed", 00:21:01.420 "digest": "sha512", 00:21:01.420 "dhgroup": "null" 00:21:01.420 } 00:21:01.420 } 00:21:01.420 ]' 00:21:01.420 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.420 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.420 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.678 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:01.678 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.678 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.678 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.678 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.936 10:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:21:01.936 10:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:21:02.504 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.504 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:02.504 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.504 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.504 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.504 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.504 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:02.504 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:02.504 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:02.504 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.504 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:02.504 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:02.504 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:02.504 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.504 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.504 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.504 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.504 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.504 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.504 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
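The trace keeps repeating one pattern per (digest, dhgroup, keyid) combination. Condensed, the host-facing part of an iteration is roughly the sequence below; this is a sketch assembled only from the rpc.py calls visible in the xtrace output (rpc.py path, host socket, NQNs and key names are the ones printed in this run), and it assumes the target application listens on its default RPC socket for the calls issued without -s.

    #!/usr/bin/env bash
    # Sketch of one DH-HMAC-CHAP iteration as driven by target/auth.sh (sha512 / null / key2 shown).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    subnqn=nqn.2024-03.io.spdk:cnode0

    # 1. Restrict the host-side bdev layer to the digest/dhgroup pair under test.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups null

    # 2. Allow the host on the subsystem with the key(s) under test (target-side RPC,
    #    default socket assumed here).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # 3. Attach a controller through the host RPC socket, authenticating with the same keys.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2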
00:21:02.504 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.763 00:21:02.763 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.763 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.763 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.021 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.021 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.021 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.021 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.021 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.021 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.021 { 00:21:03.021 "cntlid": 101, 00:21:03.021 "qid": 0, 00:21:03.021 "state": "enabled", 00:21:03.021 "thread": "nvmf_tgt_poll_group_000", 00:21:03.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:03.021 "listen_address": { 00:21:03.021 "trtype": "TCP", 00:21:03.022 "adrfam": "IPv4", 00:21:03.022 "traddr": "10.0.0.2", 00:21:03.022 "trsvcid": "4420" 00:21:03.022 }, 00:21:03.022 "peer_address": { 00:21:03.022 "trtype": "TCP", 00:21:03.022 "adrfam": "IPv4", 00:21:03.022 "traddr": "10.0.0.1", 00:21:03.022 "trsvcid": "58514" 00:21:03.022 }, 00:21:03.022 "auth": { 00:21:03.022 "state": "completed", 00:21:03.022 "digest": "sha512", 00:21:03.022 "dhgroup": "null" 00:21:03.022 } 00:21:03.022 } 00:21:03.022 ]' 00:21:03.022 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.022 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.022 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.281 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:03.281 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.281 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.281 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.281 10:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.281 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:21:03.281 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:21:03.848 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.849 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:03.849 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.849 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.849 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.849 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.849 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:03.849 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:04.108 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:04.108 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.108 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:04.108 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:04.108 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:04.108 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.108 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:04.108 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.108 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.108 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.108 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:04.108 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.108 10:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.366 00:21:04.366 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.366 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.366 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.625 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.625 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.625 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.625 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.625 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.625 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.625 { 00:21:04.625 "cntlid": 103, 00:21:04.625 "qid": 0, 00:21:04.625 "state": "enabled", 00:21:04.625 "thread": "nvmf_tgt_poll_group_000", 00:21:04.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:04.625 "listen_address": { 00:21:04.625 "trtype": "TCP", 00:21:04.625 "adrfam": "IPv4", 00:21:04.625 "traddr": "10.0.0.2", 00:21:04.625 "trsvcid": "4420" 00:21:04.625 }, 00:21:04.625 "peer_address": { 00:21:04.625 "trtype": "TCP", 00:21:04.625 "adrfam": "IPv4", 00:21:04.625 "traddr": "10.0.0.1", 00:21:04.625 "trsvcid": "58548" 00:21:04.625 }, 00:21:04.625 "auth": { 00:21:04.625 "state": "completed", 00:21:04.625 "digest": "sha512", 00:21:04.625 "dhgroup": "null" 00:21:04.625 } 00:21:04.625 } 00:21:04.625 ]' 00:21:04.625 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.625 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.625 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.625 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:04.625 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.625 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.625 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.625 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.884 10:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:21:04.884 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:21:05.451 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.451 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:05.451 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.451 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.451 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.451 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.451 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.451 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.451 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.710 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:05.710 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.710 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:05.710 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:05.710 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:05.710 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.710 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.710 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.710 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.710 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.710 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
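After each attach, the script checks both ends of the connection: it lists controllers on the host socket and dumps the subsystem's qpairs on the target, asserting that auth.state is "completed" and that digest/dhgroup match the pair just configured. A standalone equivalent of those [[ ... ]] checks might look like this (a sketch using the same jq filters as the trace; sha512/ffdhe2048 are the values for the iteration in progress here, and the target RPC is again assumed to use its default socket):

    #!/usr/bin/env bash
    set -euo pipefail
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Host side: the controller attached above should be reported as nvme0.
    name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    # Target side: the admin qpair must have completed DH-HMAC-CHAP with the expected parameters.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]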
00:21:05.710 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.710 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.969 00:21:05.969 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.969 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.969 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.227 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.227 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.227 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.227 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.227 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.227 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.227 { 00:21:06.227 "cntlid": 105, 00:21:06.227 "qid": 0, 00:21:06.227 "state": "enabled", 00:21:06.227 "thread": "nvmf_tgt_poll_group_000", 00:21:06.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:06.227 "listen_address": { 00:21:06.227 "trtype": "TCP", 00:21:06.227 "adrfam": "IPv4", 00:21:06.227 "traddr": "10.0.0.2", 00:21:06.227 "trsvcid": "4420" 00:21:06.227 }, 00:21:06.227 "peer_address": { 00:21:06.227 "trtype": "TCP", 00:21:06.227 "adrfam": "IPv4", 00:21:06.227 "traddr": "10.0.0.1", 00:21:06.227 "trsvcid": "58566" 00:21:06.227 }, 00:21:06.227 "auth": { 00:21:06.227 "state": "completed", 00:21:06.227 "digest": "sha512", 00:21:06.227 "dhgroup": "ffdhe2048" 00:21:06.227 } 00:21:06.227 } 00:21:06.227 ]' 00:21:06.227 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.227 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.227 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.227 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:06.227 10:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.227 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.227 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.227 10:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.486 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:21:06.486 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:21:07.053 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.053 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:07.053 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.053 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.053 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.053 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.053 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:07.053 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:07.312 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:07.312 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.312 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:07.312 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:07.312 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:07.312 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.312 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.312 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.312 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:07.312 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.312 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.312 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.312 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.570 00:21:07.570 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.570 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.570 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.570 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.570 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.570 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.570 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.570 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.570 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.570 { 00:21:07.570 "cntlid": 107, 00:21:07.570 "qid": 0, 00:21:07.570 "state": "enabled", 00:21:07.570 "thread": "nvmf_tgt_poll_group_000", 00:21:07.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:07.570 "listen_address": { 00:21:07.570 "trtype": "TCP", 00:21:07.570 "adrfam": "IPv4", 00:21:07.570 "traddr": "10.0.0.2", 00:21:07.570 "trsvcid": "4420" 00:21:07.570 }, 00:21:07.570 "peer_address": { 00:21:07.570 "trtype": "TCP", 00:21:07.570 "adrfam": "IPv4", 00:21:07.570 "traddr": "10.0.0.1", 00:21:07.570 "trsvcid": "58582" 00:21:07.570 }, 00:21:07.570 "auth": { 00:21:07.570 "state": "completed", 00:21:07.570 "digest": "sha512", 00:21:07.570 "dhgroup": "ffdhe2048" 00:21:07.570 } 00:21:07.570 } 00:21:07.570 ]' 00:21:07.570 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.887 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.887 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.887 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:07.887 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:07.887 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.887 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.887 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.182 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:21:08.182 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:21:08.441 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.441 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:08.441 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.441 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.700 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.700 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.700 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:08.700 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:08.700 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:08.700 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.700 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:08.700 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:08.700 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:08.700 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.700 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
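Besides the SPDK host stack, every iteration also authenticates through the kernel initiator: nvme_connect passes the same key material as DHHC-1 secret strings on the nvme-cli command line, and the following disconnect reports exactly one controller torn down. Reduced to the nvme-cli calls shown in the trace, the sequence is roughly the sketch below; the secret arguments are placeholders standing in for the DHHC-1:xx:... blobs printed above, and for key3 no controller secret is configured, so --dhchap-ctrl-secret is simply omitted in that case.

    #!/usr/bin/env bash
    # Kernel-initiator leg of one iteration; $1/$2 stand in for the host and controller
    # secrets printed in the trace (pass only $1 for the key3 case).
    secret=$1
    ctrl_secret=${2:-}
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
    subnqn=nqn.2024-03.io.spdk:cnode0

    args=(--dhchap-secret "$secret")
    if [[ -n $ctrl_secret ]]; then
        args+=(--dhchap-ctrl-secret "$ctrl_secret")
    fi

    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 "${args[@]}"

    # A successful run then reports "NQN:<subnqn> disconnected 1 controller(s)".
    nvme disconnect -n "$subnqn"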
00:21:08.700 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.700 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.700 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.700 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.700 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.700 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.958 00:21:08.959 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.959 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.959 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.217 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.217 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.217 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.217 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.217 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.217 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.217 { 00:21:09.217 "cntlid": 109, 00:21:09.217 "qid": 0, 00:21:09.217 "state": "enabled", 00:21:09.217 "thread": "nvmf_tgt_poll_group_000", 00:21:09.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:09.217 "listen_address": { 00:21:09.217 "trtype": "TCP", 00:21:09.217 "adrfam": "IPv4", 00:21:09.217 "traddr": "10.0.0.2", 00:21:09.217 "trsvcid": "4420" 00:21:09.217 }, 00:21:09.217 "peer_address": { 00:21:09.217 "trtype": "TCP", 00:21:09.217 "adrfam": "IPv4", 00:21:09.217 "traddr": "10.0.0.1", 00:21:09.217 "trsvcid": "58608" 00:21:09.217 }, 00:21:09.217 "auth": { 00:21:09.217 "state": "completed", 00:21:09.217 "digest": "sha512", 00:21:09.217 "dhgroup": "ffdhe2048" 00:21:09.217 } 00:21:09.217 } 00:21:09.217 ]' 00:21:09.217 10:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.217 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.217 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.217 10:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:09.217 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.476 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.476 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.476 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.476 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:21:09.476 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:21:10.044 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.044 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:10.044 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.044 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.044 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.044 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.044 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:10.044 10:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:10.304 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:10.304 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.304 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:10.304 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:10.304 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:10.304 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.304 10:23:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:10.304 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.304 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.304 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.304 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:10.304 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:10.304 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:10.563 00:21:10.563 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.563 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.563 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.822 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.822 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.822 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.822 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.822 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.822 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.822 { 00:21:10.822 "cntlid": 111, 00:21:10.822 "qid": 0, 00:21:10.822 "state": "enabled", 00:21:10.822 "thread": "nvmf_tgt_poll_group_000", 00:21:10.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:10.822 "listen_address": { 00:21:10.822 "trtype": "TCP", 00:21:10.822 "adrfam": "IPv4", 00:21:10.822 "traddr": "10.0.0.2", 00:21:10.822 "trsvcid": "4420" 00:21:10.822 }, 00:21:10.822 "peer_address": { 00:21:10.822 "trtype": "TCP", 00:21:10.822 "adrfam": "IPv4", 00:21:10.822 "traddr": "10.0.0.1", 00:21:10.822 "trsvcid": "58628" 00:21:10.822 }, 00:21:10.822 "auth": { 00:21:10.822 "state": "completed", 00:21:10.822 "digest": "sha512", 00:21:10.822 "dhgroup": "ffdhe2048" 00:21:10.822 } 00:21:10.822 } 00:21:10.822 ]' 00:21:10.822 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.822 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.822 
10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.822 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:10.822 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.822 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.822 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.822 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.081 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:21:11.081 10:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:21:11.652 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.652 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:11.652 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.652 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.652 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.652 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.652 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.652 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.652 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.910 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:11.910 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.910 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.910 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:11.910 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:11.910 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.910 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.910 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.910 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.910 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.910 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.910 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.910 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.169 00:21:12.169 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.169 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.169 10:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.428 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.428 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.428 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.428 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.428 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.428 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.428 { 00:21:12.428 "cntlid": 113, 00:21:12.428 "qid": 0, 00:21:12.428 "state": "enabled", 00:21:12.428 "thread": "nvmf_tgt_poll_group_000", 00:21:12.428 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:12.428 "listen_address": { 00:21:12.428 "trtype": "TCP", 00:21:12.428 "adrfam": "IPv4", 00:21:12.428 "traddr": "10.0.0.2", 00:21:12.428 "trsvcid": "4420" 00:21:12.428 }, 00:21:12.428 "peer_address": { 00:21:12.428 "trtype": "TCP", 00:21:12.428 "adrfam": "IPv4", 00:21:12.428 "traddr": "10.0.0.1", 00:21:12.428 "trsvcid": "58654" 00:21:12.428 }, 00:21:12.428 "auth": { 00:21:12.428 "state": "completed", 00:21:12.428 "digest": "sha512", 00:21:12.428 "dhgroup": "ffdhe3072" 00:21:12.428 } 00:21:12.428 } 00:21:12.428 ]' 00:21:12.428 10:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.428 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.428 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.428 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:12.428 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.428 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.428 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.428 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.687 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:21:12.687 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:21:13.254 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.254 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:13.254 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.254 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.254 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.254 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.254 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:13.254 10:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:13.513 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:13.513 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.513 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:13.513 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:13.513 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:13.513 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.513 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.513 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.513 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.513 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.513 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.513 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.513 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.771 00:21:13.771 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.771 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.771 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.771 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.030 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.030 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.030 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.030 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.030 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.030 { 00:21:14.030 "cntlid": 115, 00:21:14.030 "qid": 0, 00:21:14.030 "state": "enabled", 00:21:14.030 "thread": "nvmf_tgt_poll_group_000", 00:21:14.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:14.030 "listen_address": { 00:21:14.030 "trtype": "TCP", 00:21:14.030 "adrfam": "IPv4", 00:21:14.030 "traddr": "10.0.0.2", 00:21:14.030 "trsvcid": "4420" 00:21:14.030 }, 00:21:14.030 "peer_address": { 00:21:14.030 "trtype": "TCP", 00:21:14.030 "adrfam": "IPv4", 
00:21:14.030 "traddr": "10.0.0.1", 00:21:14.030 "trsvcid": "46266" 00:21:14.030 }, 00:21:14.030 "auth": { 00:21:14.030 "state": "completed", 00:21:14.030 "digest": "sha512", 00:21:14.030 "dhgroup": "ffdhe3072" 00:21:14.030 } 00:21:14.030 } 00:21:14.030 ]' 00:21:14.030 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.030 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.030 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.030 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:14.030 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.030 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.030 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.030 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.288 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:21:14.288 10:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:21:14.856 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.856 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:14.856 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.856 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.856 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.856 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.856 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.856 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:15.115 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:15.115 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.115 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:15.115 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:15.115 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:15.115 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.115 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.115 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.115 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.115 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.115 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.115 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.115 10:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.374 00:21:15.374 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.374 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.374 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.374 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.374 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.374 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.374 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.374 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.374 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.374 { 00:21:15.374 "cntlid": 117, 00:21:15.374 "qid": 0, 00:21:15.374 "state": "enabled", 00:21:15.374 "thread": "nvmf_tgt_poll_group_000", 00:21:15.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:15.374 "listen_address": { 00:21:15.374 "trtype": "TCP", 
00:21:15.374 "adrfam": "IPv4", 00:21:15.374 "traddr": "10.0.0.2", 00:21:15.374 "trsvcid": "4420" 00:21:15.374 }, 00:21:15.374 "peer_address": { 00:21:15.374 "trtype": "TCP", 00:21:15.374 "adrfam": "IPv4", 00:21:15.374 "traddr": "10.0.0.1", 00:21:15.374 "trsvcid": "46284" 00:21:15.374 }, 00:21:15.374 "auth": { 00:21:15.374 "state": "completed", 00:21:15.374 "digest": "sha512", 00:21:15.374 "dhgroup": "ffdhe3072" 00:21:15.374 } 00:21:15.374 } 00:21:15.374 ]' 00:21:15.374 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.633 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.633 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.633 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:15.633 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.633 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.633 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.633 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.892 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:21:15.892 10:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:21:16.459 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.459 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:16.459 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.459 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.459 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.459 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.459 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:16.459 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:16.459 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:16.459 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.459 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:16.459 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:16.459 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:16.459 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.459 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:16.459 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.459 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.459 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.459 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:16.459 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.459 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.718 00:21:16.718 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.718 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.718 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.978 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.978 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.978 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.978 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.978 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.978 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.978 { 00:21:16.978 "cntlid": 119, 00:21:16.978 "qid": 0, 00:21:16.978 "state": "enabled", 00:21:16.978 "thread": "nvmf_tgt_poll_group_000", 00:21:16.978 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:16.978 "listen_address": { 00:21:16.978 "trtype": "TCP", 00:21:16.978 "adrfam": "IPv4", 00:21:16.978 "traddr": "10.0.0.2", 00:21:16.978 "trsvcid": "4420" 00:21:16.978 }, 00:21:16.978 "peer_address": { 00:21:16.978 "trtype": "TCP", 00:21:16.978 "adrfam": "IPv4", 00:21:16.978 "traddr": "10.0.0.1", 00:21:16.978 "trsvcid": "46312" 00:21:16.978 }, 00:21:16.978 "auth": { 00:21:16.978 "state": "completed", 00:21:16.978 "digest": "sha512", 00:21:16.978 "dhgroup": "ffdhe3072" 00:21:16.978 } 00:21:16.978 } 00:21:16.978 ]' 00:21:16.978 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.978 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.978 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.238 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:17.238 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.238 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.238 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.238 10:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.238 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:21:17.238 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:21:17.805 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.805 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:17.805 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.805 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.805 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.805 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:17.805 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.805 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:17.805 10:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:18.064 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:18.064 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.064 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:18.064 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:18.064 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:18.064 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.064 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.064 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.064 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.064 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.064 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.064 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.064 10:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.323 00:21:18.323 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.323 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.323 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.582 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.582 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.582 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.582 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.582 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.582 10:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.582 { 00:21:18.582 "cntlid": 121, 00:21:18.582 "qid": 0, 00:21:18.582 "state": "enabled", 00:21:18.582 "thread": "nvmf_tgt_poll_group_000", 00:21:18.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:18.582 "listen_address": { 00:21:18.582 "trtype": "TCP", 00:21:18.582 "adrfam": "IPv4", 00:21:18.582 "traddr": "10.0.0.2", 00:21:18.582 "trsvcid": "4420" 00:21:18.582 }, 00:21:18.582 "peer_address": { 00:21:18.582 "trtype": "TCP", 00:21:18.582 "adrfam": "IPv4", 00:21:18.582 "traddr": "10.0.0.1", 00:21:18.582 "trsvcid": "46346" 00:21:18.582 }, 00:21:18.582 "auth": { 00:21:18.582 "state": "completed", 00:21:18.582 "digest": "sha512", 00:21:18.582 "dhgroup": "ffdhe4096" 00:21:18.582 } 00:21:18.582 } 00:21:18.582 ]' 00:21:18.582 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.582 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.582 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.582 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:18.582 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.841 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.841 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.841 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.841 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:21:18.841 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:21:19.408 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.408 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:19.408 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.408 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.408 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
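[Editor's note] The three [[ ... ]] checks at target/auth.sh@75-77, repeated after every attach above, are what actually validate the negotiation. A minimal sketch of that assertion, using the same rpc_cmd helper and jq filters as the trace (the check_qpair_auth wrapper name is only illustrative):

check_qpair_auth() {
    local digest=$1 dhgroup=$2 qpairs

    # Ask the target for the subsystem's active qpairs (one per connection here).
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # The qpair must report the negotiated digest and DH group, and its
    # DH-HMAC-CHAP state must have reached "completed".
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == "completed" ]]
}

# For the ffdhe4096/key0 pass that just completed above:
check_qpair_auth sha512 ffdhe4096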
00:21:19.409 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.409 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:19.409 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:19.668 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:19.668 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.668 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:19.668 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:19.668 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:19.668 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.668 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.668 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.668 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.668 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.668 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.668 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.668 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.926 00:21:19.926 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.926 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.927 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.185 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.185 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.185 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.185 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.185 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.185 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.185 { 00:21:20.185 "cntlid": 123, 00:21:20.185 "qid": 0, 00:21:20.185 "state": "enabled", 00:21:20.185 "thread": "nvmf_tgt_poll_group_000", 00:21:20.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:20.185 "listen_address": { 00:21:20.185 "trtype": "TCP", 00:21:20.185 "adrfam": "IPv4", 00:21:20.185 "traddr": "10.0.0.2", 00:21:20.185 "trsvcid": "4420" 00:21:20.185 }, 00:21:20.185 "peer_address": { 00:21:20.185 "trtype": "TCP", 00:21:20.185 "adrfam": "IPv4", 00:21:20.185 "traddr": "10.0.0.1", 00:21:20.185 "trsvcid": "46386" 00:21:20.185 }, 00:21:20.185 "auth": { 00:21:20.185 "state": "completed", 00:21:20.185 "digest": "sha512", 00:21:20.185 "dhgroup": "ffdhe4096" 00:21:20.185 } 00:21:20.185 } 00:21:20.185 ]' 00:21:20.185 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.185 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.185 10:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.185 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:20.185 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.185 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.185 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.185 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.444 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:21:20.444 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:21:21.012 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.012 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:21.012 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.012 10:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.012 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.012 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.012 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:21.012 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:21.271 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:21.271 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.271 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:21.271 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:21.271 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:21.271 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.271 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.271 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.271 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.271 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.271 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.271 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.271 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.529 00:21:21.529 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.529 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.529 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.788 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.788 10:23:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.788 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.788 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.788 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.788 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.788 { 00:21:21.788 "cntlid": 125, 00:21:21.788 "qid": 0, 00:21:21.788 "state": "enabled", 00:21:21.788 "thread": "nvmf_tgt_poll_group_000", 00:21:21.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:21.788 "listen_address": { 00:21:21.788 "trtype": "TCP", 00:21:21.788 "adrfam": "IPv4", 00:21:21.788 "traddr": "10.0.0.2", 00:21:21.788 "trsvcid": "4420" 00:21:21.788 }, 00:21:21.788 "peer_address": { 00:21:21.788 "trtype": "TCP", 00:21:21.788 "adrfam": "IPv4", 00:21:21.788 "traddr": "10.0.0.1", 00:21:21.788 "trsvcid": "46394" 00:21:21.788 }, 00:21:21.788 "auth": { 00:21:21.788 "state": "completed", 00:21:21.788 "digest": "sha512", 00:21:21.788 "dhgroup": "ffdhe4096" 00:21:21.788 } 00:21:21.788 } 00:21:21.788 ]' 00:21:21.788 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.788 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.788 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.788 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:21.788 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.788 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.788 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.788 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.047 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:21:22.047 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:21:22.614 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.614 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:22.614 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.614 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.614 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.614 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.614 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:22.614 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:22.873 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:22.873 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.873 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.873 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:22.873 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:22.873 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.873 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:22.874 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.874 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.874 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.874 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:22.874 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:22.874 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:23.132 00:21:23.132 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.132 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.132 10:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.391 10:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.391 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.391 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.391 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.391 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.391 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.391 { 00:21:23.391 "cntlid": 127, 00:21:23.391 "qid": 0, 00:21:23.391 "state": "enabled", 00:21:23.391 "thread": "nvmf_tgt_poll_group_000", 00:21:23.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:23.391 "listen_address": { 00:21:23.391 "trtype": "TCP", 00:21:23.391 "adrfam": "IPv4", 00:21:23.391 "traddr": "10.0.0.2", 00:21:23.391 "trsvcid": "4420" 00:21:23.391 }, 00:21:23.391 "peer_address": { 00:21:23.391 "trtype": "TCP", 00:21:23.391 "adrfam": "IPv4", 00:21:23.391 "traddr": "10.0.0.1", 00:21:23.391 "trsvcid": "40384" 00:21:23.391 }, 00:21:23.391 "auth": { 00:21:23.391 "state": "completed", 00:21:23.391 "digest": "sha512", 00:21:23.391 "dhgroup": "ffdhe4096" 00:21:23.391 } 00:21:23.391 } 00:21:23.391 ]' 00:21:23.391 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.391 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.391 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.391 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:23.391 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.391 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.391 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.391 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.650 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:21:23.650 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:21:24.219 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.219 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:24.219 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.219 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.219 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.219 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:24.219 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.219 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:24.219 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:24.478 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:24.478 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.478 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.478 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:24.478 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:24.478 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.478 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.478 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.478 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.478 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.478 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.478 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.478 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.737 00:21:24.737 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.737 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.737 
10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.996 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.996 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.996 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.996 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.996 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.996 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.996 { 00:21:24.996 "cntlid": 129, 00:21:24.996 "qid": 0, 00:21:24.996 "state": "enabled", 00:21:24.996 "thread": "nvmf_tgt_poll_group_000", 00:21:24.996 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:24.996 "listen_address": { 00:21:24.996 "trtype": "TCP", 00:21:24.996 "adrfam": "IPv4", 00:21:24.996 "traddr": "10.0.0.2", 00:21:24.996 "trsvcid": "4420" 00:21:24.996 }, 00:21:24.996 "peer_address": { 00:21:24.996 "trtype": "TCP", 00:21:24.996 "adrfam": "IPv4", 00:21:24.996 "traddr": "10.0.0.1", 00:21:24.996 "trsvcid": "40398" 00:21:24.996 }, 00:21:24.996 "auth": { 00:21:24.996 "state": "completed", 00:21:24.996 "digest": "sha512", 00:21:24.996 "dhgroup": "ffdhe6144" 00:21:24.996 } 00:21:24.996 } 00:21:24.996 ]' 00:21:24.996 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.996 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.996 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.996 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:24.996 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.255 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.255 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.255 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.255 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:21:25.255 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret 
DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:21:25.823 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.823 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:25.823 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.823 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.823 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.823 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.823 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:25.823 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:26.081 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:26.081 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.081 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.081 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:26.081 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:26.081 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.081 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.081 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.081 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.081 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.081 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.081 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.081 10:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.340 00:21:26.599 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.599 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.599 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.599 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.599 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.599 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.599 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.599 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.599 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.599 { 00:21:26.599 "cntlid": 131, 00:21:26.599 "qid": 0, 00:21:26.599 "state": "enabled", 00:21:26.599 "thread": "nvmf_tgt_poll_group_000", 00:21:26.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:26.599 "listen_address": { 00:21:26.599 "trtype": "TCP", 00:21:26.599 "adrfam": "IPv4", 00:21:26.599 "traddr": "10.0.0.2", 00:21:26.599 "trsvcid": "4420" 00:21:26.599 }, 00:21:26.599 "peer_address": { 00:21:26.599 "trtype": "TCP", 00:21:26.599 "adrfam": "IPv4", 00:21:26.599 "traddr": "10.0.0.1", 00:21:26.599 "trsvcid": "40418" 00:21:26.599 }, 00:21:26.599 "auth": { 00:21:26.599 "state": "completed", 00:21:26.599 "digest": "sha512", 00:21:26.599 "dhgroup": "ffdhe6144" 00:21:26.599 } 00:21:26.599 } 00:21:26.599 ]' 00:21:26.599 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.599 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.599 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.860 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:26.860 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.860 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.860 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.860 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.118 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:21:27.118 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:21:27.686 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.686 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:27.686 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.686 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.686 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.686 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.686 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:27.686 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:27.945 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:27.945 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.945 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:27.945 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:27.945 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:27.945 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.945 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.945 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.945 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.945 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.946 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.946 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.946 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.204 00:21:28.204 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.205 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.205 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.463 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.463 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.463 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.463 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.463 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.463 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.463 { 00:21:28.463 "cntlid": 133, 00:21:28.463 "qid": 0, 00:21:28.463 "state": "enabled", 00:21:28.463 "thread": "nvmf_tgt_poll_group_000", 00:21:28.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:28.463 "listen_address": { 00:21:28.463 "trtype": "TCP", 00:21:28.463 "adrfam": "IPv4", 00:21:28.463 "traddr": "10.0.0.2", 00:21:28.463 "trsvcid": "4420" 00:21:28.463 }, 00:21:28.463 "peer_address": { 00:21:28.463 "trtype": "TCP", 00:21:28.463 "adrfam": "IPv4", 00:21:28.463 "traddr": "10.0.0.1", 00:21:28.463 "trsvcid": "40436" 00:21:28.463 }, 00:21:28.463 "auth": { 00:21:28.463 "state": "completed", 00:21:28.463 "digest": "sha512", 00:21:28.463 "dhgroup": "ffdhe6144" 00:21:28.463 } 00:21:28.463 } 00:21:28.463 ]' 00:21:28.463 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.463 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.463 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.463 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:28.463 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.463 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.463 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.463 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.721 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret 
DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:21:28.721 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:21:29.287 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.287 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:29.287 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.287 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.287 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.287 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.287 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:29.287 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:29.546 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:29.546 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.546 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:29.546 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:29.546 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:29.546 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.546 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:29.546 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.546 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.546 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.546 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:29.546 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:21:29.546 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:29.804 00:21:29.804 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.804 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.804 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.063 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.063 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.063 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.063 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.063 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.063 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.063 { 00:21:30.063 "cntlid": 135, 00:21:30.063 "qid": 0, 00:21:30.063 "state": "enabled", 00:21:30.063 "thread": "nvmf_tgt_poll_group_000", 00:21:30.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:30.063 "listen_address": { 00:21:30.063 "trtype": "TCP", 00:21:30.063 "adrfam": "IPv4", 00:21:30.063 "traddr": "10.0.0.2", 00:21:30.063 "trsvcid": "4420" 00:21:30.063 }, 00:21:30.063 "peer_address": { 00:21:30.063 "trtype": "TCP", 00:21:30.063 "adrfam": "IPv4", 00:21:30.063 "traddr": "10.0.0.1", 00:21:30.063 "trsvcid": "40456" 00:21:30.063 }, 00:21:30.063 "auth": { 00:21:30.063 "state": "completed", 00:21:30.063 "digest": "sha512", 00:21:30.063 "dhgroup": "ffdhe6144" 00:21:30.063 } 00:21:30.063 } 00:21:30.063 ]' 00:21:30.063 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.063 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.063 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.063 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:30.063 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.063 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.063 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.322 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.322 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:21:30.322 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:21:30.888 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.888 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:30.888 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.888 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.888 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.888 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:30.889 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.889 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:30.889 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:31.148 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:31.148 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.148 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.148 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:31.148 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:31.148 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.148 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.148 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.148 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.148 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.148 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.148 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.148 10:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.714 00:21:31.714 10:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.714 10:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.714 10:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.973 10:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.973 10:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.973 10:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.973 10:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.973 10:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.973 10:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.973 { 00:21:31.973 "cntlid": 137, 00:21:31.973 "qid": 0, 00:21:31.973 "state": "enabled", 00:21:31.973 "thread": "nvmf_tgt_poll_group_000", 00:21:31.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:31.973 "listen_address": { 00:21:31.973 "trtype": "TCP", 00:21:31.973 "adrfam": "IPv4", 00:21:31.973 "traddr": "10.0.0.2", 00:21:31.973 "trsvcid": "4420" 00:21:31.973 }, 00:21:31.973 "peer_address": { 00:21:31.973 "trtype": "TCP", 00:21:31.973 "adrfam": "IPv4", 00:21:31.973 "traddr": "10.0.0.1", 00:21:31.973 "trsvcid": "40478" 00:21:31.973 }, 00:21:31.973 "auth": { 00:21:31.973 "state": "completed", 00:21:31.973 "digest": "sha512", 00:21:31.973 "dhgroup": "ffdhe8192" 00:21:31.973 } 00:21:31.973 } 00:21:31.973 ]' 00:21:31.973 10:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.973 10:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.973 10:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.973 10:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:31.973 10:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.973 10:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.973 10:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.973 10:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.231 10:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:21:32.231 10:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:21:32.798 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.799 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:32.799 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.799 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.799 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.799 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.799 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:32.799 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:33.058 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:33.058 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.058 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.058 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:33.058 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:33.058 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.058 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.058 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.058 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.058 10:23:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.058 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.058 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.058 10:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.317 00:21:33.576 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.576 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.576 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.576 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.576 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.576 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.576 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.576 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.576 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.576 { 00:21:33.576 "cntlid": 139, 00:21:33.576 "qid": 0, 00:21:33.576 "state": "enabled", 00:21:33.576 "thread": "nvmf_tgt_poll_group_000", 00:21:33.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:33.576 "listen_address": { 00:21:33.576 "trtype": "TCP", 00:21:33.576 "adrfam": "IPv4", 00:21:33.576 "traddr": "10.0.0.2", 00:21:33.576 "trsvcid": "4420" 00:21:33.576 }, 00:21:33.576 "peer_address": { 00:21:33.576 "trtype": "TCP", 00:21:33.576 "adrfam": "IPv4", 00:21:33.576 "traddr": "10.0.0.1", 00:21:33.576 "trsvcid": "52880" 00:21:33.576 }, 00:21:33.576 "auth": { 00:21:33.576 "state": "completed", 00:21:33.576 "digest": "sha512", 00:21:33.576 "dhgroup": "ffdhe8192" 00:21:33.576 } 00:21:33.576 } 00:21:33.576 ]' 00:21:33.576 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.576 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.576 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.835 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:33.835 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.835 10:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.835 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.835 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.094 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:21:34.094 10:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: --dhchap-ctrl-secret DHHC-1:02:ZDExZDIxMWIxODE2MGE3ZjcxNTZmYmQ4M2FkY2NhOGNlN2Q3YmE4ODkzOTRhMmYzOYOvaQ==: 00:21:34.661 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.661 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:34.661 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.661 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.661 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.661 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.661 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:34.662 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:34.662 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:34.662 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.662 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.662 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:34.662 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:34.662 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.662 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.662 10:23:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.662 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.662 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.662 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.662 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.662 10:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.229 00:21:35.229 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.229 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.229 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.488 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.488 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.488 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.488 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.488 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.488 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.488 { 00:21:35.488 "cntlid": 141, 00:21:35.488 "qid": 0, 00:21:35.488 "state": "enabled", 00:21:35.488 "thread": "nvmf_tgt_poll_group_000", 00:21:35.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:35.489 "listen_address": { 00:21:35.489 "trtype": "TCP", 00:21:35.489 "adrfam": "IPv4", 00:21:35.489 "traddr": "10.0.0.2", 00:21:35.489 "trsvcid": "4420" 00:21:35.489 }, 00:21:35.489 "peer_address": { 00:21:35.489 "trtype": "TCP", 00:21:35.489 "adrfam": "IPv4", 00:21:35.489 "traddr": "10.0.0.1", 00:21:35.489 "trsvcid": "52910" 00:21:35.489 }, 00:21:35.489 "auth": { 00:21:35.489 "state": "completed", 00:21:35.489 "digest": "sha512", 00:21:35.489 "dhgroup": "ffdhe8192" 00:21:35.489 } 00:21:35.489 } 00:21:35.489 ]' 00:21:35.489 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.489 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.489 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.489 10:23:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.489 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.489 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.489 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.489 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.748 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:21:35.748 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:01:ZWEzNDQ0YmM1MTUwZjg3NjdjNTJlZTg3NjU3YjQzZjbwUG5H: 00:21:36.315 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.315 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:36.315 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.315 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.315 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.315 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.315 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:36.315 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:36.574 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:36.574 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.574 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.574 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:36.574 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:36.574 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.574 10:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:36.574 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.574 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.574 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.574 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:36.574 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.574 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.141 00:21:37.141 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.141 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.141 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.141 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.141 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.141 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.141 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.141 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.141 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.141 { 00:21:37.141 "cntlid": 143, 00:21:37.141 "qid": 0, 00:21:37.141 "state": "enabled", 00:21:37.141 "thread": "nvmf_tgt_poll_group_000", 00:21:37.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:37.141 "listen_address": { 00:21:37.141 "trtype": "TCP", 00:21:37.141 "adrfam": "IPv4", 00:21:37.141 "traddr": "10.0.0.2", 00:21:37.141 "trsvcid": "4420" 00:21:37.141 }, 00:21:37.141 "peer_address": { 00:21:37.141 "trtype": "TCP", 00:21:37.141 "adrfam": "IPv4", 00:21:37.141 "traddr": "10.0.0.1", 00:21:37.141 "trsvcid": "52934" 00:21:37.141 }, 00:21:37.141 "auth": { 00:21:37.141 "state": "completed", 00:21:37.141 "digest": "sha512", 00:21:37.141 "dhgroup": "ffdhe8192" 00:21:37.141 } 00:21:37.141 } 00:21:37.141 ]' 00:21:37.141 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.399 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.399 
10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.399 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:37.399 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.399 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.399 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.399 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.658 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:21:37.658 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:21:38.226 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.226 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:38.226 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.226 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.226 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.226 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:38.226 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:38.226 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:38.226 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:38.226 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:38.226 10:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:38.226 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:38.226 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.226 10:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.226 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:38.226 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:38.485 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.485 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.485 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.485 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.485 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.485 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.485 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.485 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.744 00:21:38.744 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.744 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.744 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.003 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.003 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.003 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.003 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.003 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.003 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.003 { 00:21:39.003 "cntlid": 145, 00:21:39.003 "qid": 0, 00:21:39.003 "state": "enabled", 00:21:39.003 "thread": "nvmf_tgt_poll_group_000", 00:21:39.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:39.003 "listen_address": { 00:21:39.003 "trtype": "TCP", 00:21:39.003 "adrfam": "IPv4", 00:21:39.003 "traddr": "10.0.0.2", 00:21:39.003 "trsvcid": "4420" 00:21:39.003 }, 00:21:39.003 "peer_address": { 00:21:39.003 
"trtype": "TCP", 00:21:39.003 "adrfam": "IPv4", 00:21:39.003 "traddr": "10.0.0.1", 00:21:39.003 "trsvcid": "52970" 00:21:39.003 }, 00:21:39.003 "auth": { 00:21:39.003 "state": "completed", 00:21:39.003 "digest": "sha512", 00:21:39.003 "dhgroup": "ffdhe8192" 00:21:39.003 } 00:21:39.003 } 00:21:39.003 ]' 00:21:39.003 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.003 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.003 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.262 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:39.262 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.262 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.262 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.262 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.262 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:21:39.262 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZTQxZTczZTU3OWRjMzNjYzJjNjg1ZDE0MzkxYzUxMDJjMmEzMjY1ZGU2MzMwNWRiooT6bw==: --dhchap-ctrl-secret DHHC-1:03:MWY1NWRkNGJiNDIwYmQ3NjY0M2VhY2EyN2IxMGVlYjA3YTg5MDUzNmZhYjFjNTg4Y2YzY2FlNjhiYjI5MGJmOB0pSzQ=: 00:21:39.830 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.830 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:39.830 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.830 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.830 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.830 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:39.830 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.830 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.830 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.830 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:39.830 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:39.830 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:40.088 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:40.088 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.088 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:40.088 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.088 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:40.089 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:40.089 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:40.347 request: 00:21:40.347 { 00:21:40.347 "name": "nvme0", 00:21:40.347 "trtype": "tcp", 00:21:40.347 "traddr": "10.0.0.2", 00:21:40.347 "adrfam": "ipv4", 00:21:40.347 "trsvcid": "4420", 00:21:40.347 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:40.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:40.347 "prchk_reftag": false, 00:21:40.347 "prchk_guard": false, 00:21:40.347 "hdgst": false, 00:21:40.347 "ddgst": false, 00:21:40.347 "dhchap_key": "key2", 00:21:40.347 "allow_unrecognized_csi": false, 00:21:40.347 "method": "bdev_nvme_attach_controller", 00:21:40.347 "req_id": 1 00:21:40.347 } 00:21:40.347 Got JSON-RPC error response 00:21:40.347 response: 00:21:40.347 { 00:21:40.347 "code": -5, 00:21:40.347 "message": "Input/output error" 00:21:40.347 } 00:21:40.347 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:40.347 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.347 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.347 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.347 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:40.347 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.347 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.347 10:23:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.347 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.347 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.347 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.347 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.347 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.347 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:40.347 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.347 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:40.347 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.347 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:40.347 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.347 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.347 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.347 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:40.915 request: 00:21:40.915 { 00:21:40.915 "name": "nvme0", 00:21:40.915 "trtype": "tcp", 00:21:40.915 "traddr": "10.0.0.2", 00:21:40.915 "adrfam": "ipv4", 00:21:40.915 "trsvcid": "4420", 00:21:40.915 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:40.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:40.915 "prchk_reftag": false, 00:21:40.915 "prchk_guard": false, 00:21:40.915 "hdgst": false, 00:21:40.915 "ddgst": false, 00:21:40.915 "dhchap_key": "key1", 00:21:40.915 "dhchap_ctrlr_key": "ckey2", 00:21:40.915 "allow_unrecognized_csi": false, 00:21:40.915 "method": "bdev_nvme_attach_controller", 00:21:40.915 "req_id": 1 00:21:40.915 } 00:21:40.915 Got JSON-RPC error response 00:21:40.915 response: 00:21:40.915 { 00:21:40.915 "code": -5, 00:21:40.915 "message": "Input/output error" 00:21:40.915 } 00:21:40.915 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:40.915 10:23:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.915 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.915 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.915 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:40.916 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.916 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.916 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.916 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:40.916 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.916 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.916 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.916 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.916 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:40.916 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.916 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:40.916 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.916 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:40.916 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.916 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.916 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.916 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.482 request: 00:21:41.482 { 00:21:41.482 "name": "nvme0", 00:21:41.482 "trtype": "tcp", 00:21:41.482 "traddr": "10.0.0.2", 00:21:41.482 "adrfam": "ipv4", 00:21:41.482 "trsvcid": "4420", 00:21:41.482 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:41.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:41.482 "prchk_reftag": false, 00:21:41.482 "prchk_guard": false, 00:21:41.482 "hdgst": false, 00:21:41.482 "ddgst": false, 00:21:41.482 "dhchap_key": "key1", 00:21:41.482 "dhchap_ctrlr_key": "ckey1", 00:21:41.482 "allow_unrecognized_csi": false, 00:21:41.482 "method": "bdev_nvme_attach_controller", 00:21:41.482 "req_id": 1 00:21:41.482 } 00:21:41.482 Got JSON-RPC error response 00:21:41.482 response: 00:21:41.482 { 00:21:41.482 "code": -5, 00:21:41.482 "message": "Input/output error" 00:21:41.482 } 00:21:41.483 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:41.483 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:41.483 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:41.483 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:41.483 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:41.483 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.483 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.483 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.483 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3914364 00:21:41.483 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3914364 ']' 00:21:41.483 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3914364 00:21:41.483 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:41.483 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.483 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3914364 00:21:41.483 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:41.483 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:41.483 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3914364' 00:21:41.483 killing process with pid 3914364 00:21:41.483 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3914364 00:21:41.483 10:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3914364 00:21:42.453 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:42.453 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:42.453 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:42.453 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:42.453 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3935523 00:21:42.453 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3935523 00:21:42.453 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:42.453 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3935523 ']' 00:21:42.453 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.453 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:42.453 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.453 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:42.453 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.389 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.389 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:43.389 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:43.389 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:43.389 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.389 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.389 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:43.389 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3935523 00:21:43.389 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3935523 ']' 00:21:43.389 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.389 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:43.389 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:43.389 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:43.389 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.648 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.648 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:43.648 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:43.648 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.648 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.906 null0 00:21:43.906 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.906 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:43.906 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Dhe 00:21:43.906 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.906 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.906 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.906 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.GdK ]] 00:21:43.906 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GdK 00:21:43.906 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.906 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.906 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.906 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:43.906 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.aqn 00:21:43.906 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.906 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.906 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.906 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.3MW ]] 00:21:43.906 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3MW 00:21:43.906 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.906 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:44.165 10:23:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.BBE 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.bIH ]] 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bIH 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Jyj 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:21:44.165 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:44.732 nvme0n1 00:21:44.732 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.732 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.732 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.082 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.082 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.082 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.082 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.082 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.082 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.082 { 00:21:45.082 "cntlid": 1, 00:21:45.082 "qid": 0, 00:21:45.082 "state": "enabled", 00:21:45.082 "thread": "nvmf_tgt_poll_group_000", 00:21:45.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:45.082 "listen_address": { 00:21:45.082 "trtype": "TCP", 00:21:45.082 "adrfam": "IPv4", 00:21:45.082 "traddr": "10.0.0.2", 00:21:45.082 "trsvcid": "4420" 00:21:45.082 }, 00:21:45.082 "peer_address": { 00:21:45.082 "trtype": "TCP", 00:21:45.082 "adrfam": "IPv4", 00:21:45.082 "traddr": "10.0.0.1", 00:21:45.082 "trsvcid": "35224" 00:21:45.082 }, 00:21:45.082 "auth": { 00:21:45.082 "state": "completed", 00:21:45.082 "digest": "sha512", 00:21:45.082 "dhgroup": "ffdhe8192" 00:21:45.082 } 00:21:45.082 } 00:21:45.082 ]' 00:21:45.082 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.082 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.082 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.082 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:45.082 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.082 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.082 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.082 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.362 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:21:45.362 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:21:45.930 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.930 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:45.930 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.930 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.930 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.930 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:45.930 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.930 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.930 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.930 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:45.930 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:46.188 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:46.188 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:46.188 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:46.188 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:46.188 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.189 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:46.189 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.189 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:46.189 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.189 10:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.448 request: 00:21:46.448 { 00:21:46.448 "name": "nvme0", 00:21:46.448 "trtype": "tcp", 00:21:46.448 "traddr": "10.0.0.2", 00:21:46.448 "adrfam": "ipv4", 00:21:46.448 "trsvcid": "4420", 00:21:46.448 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:46.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:46.448 "prchk_reftag": false, 00:21:46.448 "prchk_guard": false, 00:21:46.448 "hdgst": false, 00:21:46.448 "ddgst": false, 00:21:46.448 "dhchap_key": "key3", 00:21:46.448 "allow_unrecognized_csi": false, 00:21:46.448 "method": "bdev_nvme_attach_controller", 00:21:46.448 "req_id": 1 00:21:46.448 } 00:21:46.448 Got JSON-RPC error response 00:21:46.448 response: 00:21:46.448 { 00:21:46.448 "code": -5, 00:21:46.448 "message": "Input/output error" 00:21:46.448 } 00:21:46.448 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:46.448 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:46.448 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:46.448 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:46.448 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:46.448 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:46.448 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:46.448 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:46.448 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:46.448 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:46.448 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:46.448 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:46.448 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.448 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:46.448 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.448 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:46.448 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.448 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.707 request: 00:21:46.707 { 00:21:46.707 "name": "nvme0", 00:21:46.707 "trtype": "tcp", 00:21:46.707 "traddr": "10.0.0.2", 00:21:46.707 "adrfam": "ipv4", 00:21:46.707 "trsvcid": "4420", 00:21:46.707 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:46.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:46.707 "prchk_reftag": false, 00:21:46.707 "prchk_guard": false, 00:21:46.707 "hdgst": false, 00:21:46.707 "ddgst": false, 00:21:46.707 "dhchap_key": "key3", 00:21:46.707 "allow_unrecognized_csi": false, 00:21:46.707 "method": "bdev_nvme_attach_controller", 00:21:46.707 "req_id": 1 00:21:46.707 } 00:21:46.707 Got JSON-RPC error response 00:21:46.707 response: 00:21:46.707 { 00:21:46.707 "code": -5, 00:21:46.707 "message": "Input/output error" 00:21:46.707 } 00:21:46.707 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:46.707 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:46.707 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:46.707 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:46.707 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:46.707 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:46.707 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:46.707 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:46.707 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:46.707 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:46.966 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:46.966 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.966 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.966 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.966 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:46.966 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.966 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.966 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.966 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:46.966 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:46.966 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:46.966 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:46.966 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.966 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:46.966 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.966 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:46.967 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:46.967 10:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:47.225 request: 00:21:47.225 { 00:21:47.225 "name": "nvme0", 00:21:47.225 "trtype": "tcp", 00:21:47.225 "traddr": "10.0.0.2", 00:21:47.225 "adrfam": "ipv4", 00:21:47.225 "trsvcid": "4420", 00:21:47.225 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:47.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:47.225 "prchk_reftag": false, 00:21:47.225 "prchk_guard": false, 00:21:47.225 "hdgst": false, 00:21:47.225 "ddgst": false, 00:21:47.225 "dhchap_key": "key0", 00:21:47.225 "dhchap_ctrlr_key": "key1", 00:21:47.225 "allow_unrecognized_csi": false, 00:21:47.225 "method": "bdev_nvme_attach_controller", 00:21:47.225 "req_id": 1 00:21:47.225 } 00:21:47.225 Got JSON-RPC error response 00:21:47.225 response: 00:21:47.225 { 00:21:47.225 "code": -5, 00:21:47.225 "message": "Input/output error" 00:21:47.226 } 00:21:47.226 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:47.226 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:47.226 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:47.226 10:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:47.226 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:47.226 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:47.226 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:47.484 nvme0n1 00:21:47.484 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:47.484 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.484 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:47.743 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.743 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.743 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.002 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:48.002 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.002 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.002 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.002 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:48.002 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:48.002 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:48.569 nvme0n1 00:21:48.569 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:48.569 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:48.569 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.828 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.828 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:48.828 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.828 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.828 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.828 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:48.828 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:48.828 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.086 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.086 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:21:49.086 10:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: --dhchap-ctrl-secret DHHC-1:03:NTJlNzAwMjczYjQxOTZkNzE3YWQxZjVmYjEyNGY4YTM1YzA1OWUwNGEwZDYwYmNmNTc2MDBlMDA4ZDlmMjgwYhS8dh8=: 00:21:49.653 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:49.653 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:49.653 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:49.653 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:49.653 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:49.653 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:49.653 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:49.653 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.653 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.912 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:21:49.912 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:49.912 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:49.912 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:49.912 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:49.912 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:49.912 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:49.912 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:49.912 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:49.912 10:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:50.171 request: 00:21:50.171 { 00:21:50.171 "name": "nvme0", 00:21:50.171 "trtype": "tcp", 00:21:50.171 "traddr": "10.0.0.2", 00:21:50.171 "adrfam": "ipv4", 00:21:50.171 "trsvcid": "4420", 00:21:50.171 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:50.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:50.171 "prchk_reftag": false, 00:21:50.171 "prchk_guard": false, 00:21:50.171 "hdgst": false, 00:21:50.171 "ddgst": false, 00:21:50.171 "dhchap_key": "key1", 00:21:50.171 "allow_unrecognized_csi": false, 00:21:50.171 "method": "bdev_nvme_attach_controller", 00:21:50.171 "req_id": 1 00:21:50.171 } 00:21:50.171 Got JSON-RPC error response 00:21:50.171 response: 00:21:50.171 { 00:21:50.171 "code": -5, 00:21:50.171 "message": "Input/output error" 00:21:50.171 } 00:21:50.171 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:50.171 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:50.171 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:50.171 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:50.171 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:50.171 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:50.171 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:51.107 nvme0n1 00:21:51.107 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:51.107 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:51.107 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.107 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.107 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.107 10:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.365 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:51.365 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.365 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.365 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.365 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:51.365 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:51.365 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:51.622 nvme0n1 00:21:51.622 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:51.623 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:51.623 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.881 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.881 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.881 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.140 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:52.140 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.140 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.140 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.140 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: '' 2s 00:21:52.140 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:52.140 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:52.140 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: 00:21:52.140 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:52.140 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:52.140 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:52.140 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: ]] 00:21:52.140 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YTIxMWYwYWQ2NWUzNzc1YzRjYjMwZDgxNWY5YmMwZGWJauU6: 00:21:52.140 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:52.140 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:52.140 10:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:54.045 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:54.045 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:54.045 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:54.045 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:54.045 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:54.045 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:54.045 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:54.045 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:54.045 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.045 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.045 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.046 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: 2s 00:21:54.046 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:54.046 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:54.046 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:54.046 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: 00:21:54.046 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:54.046 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:54.046 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:54.046 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: ]] 00:21:54.046 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:OWJiODA2ZjkxODYxYjcxMDc0NjNmNjU4ZWFkMDc5OTM1Y2M5OWE0MGEwMzk4NThjEC7s2A==: 00:21:54.046 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:54.046 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:56.580 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:56.580 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:56.580 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:56.580 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:56.580 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:56.580 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:56.580 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:56.580 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.580 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:56.580 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.580 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.580 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.580 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:56.580 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:56.580 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:56.838 nvme0n1 00:21:57.097 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:57.097 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.097 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.097 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.097 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:57.097 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:57.356 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:57.356 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.356 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:57.615 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.615 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:57.615 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.615 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.615 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.615 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:57.615 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:57.874 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:57.874 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:57.874 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.133 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.133 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:58.133 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.133 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.133 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.133 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:58.133 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:58.133 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:58.133 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:58.133 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:58.133 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:58.133 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:58.133 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:58.133 10:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:58.701 request: 00:21:58.701 { 00:21:58.701 "name": "nvme0", 00:21:58.701 "dhchap_key": "key1", 00:21:58.701 "dhchap_ctrlr_key": "key3", 00:21:58.701 "method": "bdev_nvme_set_keys", 00:21:58.701 "req_id": 1 00:21:58.701 } 00:21:58.701 Got JSON-RPC error response 00:21:58.701 response: 00:21:58.701 { 00:21:58.701 "code": -13, 00:21:58.701 "message": "Permission denied" 00:21:58.701 } 00:21:58.701 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:58.701 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:58.701 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:58.701 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:58.701 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:58.701 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:58.701 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.701 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:21:58.701 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:59.636 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:59.636 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:59.636 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.895 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:59.895 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:59.895 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.895 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.895 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.895 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:59.895 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:59.895 10:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:00.832 nvme0n1 00:22:00.832 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:00.832 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.832 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.832 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.832 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:00.832 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:00.832 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:00.832 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
00:22:00.832 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.832 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:00.832 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.832 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:00.832 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:01.091 request: 00:22:01.091 { 00:22:01.091 "name": "nvme0", 00:22:01.091 "dhchap_key": "key2", 00:22:01.091 "dhchap_ctrlr_key": "key0", 00:22:01.091 "method": "bdev_nvme_set_keys", 00:22:01.091 "req_id": 1 00:22:01.091 } 00:22:01.091 Got JSON-RPC error response 00:22:01.091 response: 00:22:01.091 { 00:22:01.091 "code": -13, 00:22:01.091 "message": "Permission denied" 00:22:01.091 } 00:22:01.091 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:01.091 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:01.091 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:01.091 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:01.091 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:01.091 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:01.091 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.349 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:01.349 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:02.285 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:02.285 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:02.285 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.544 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:02.544 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:02.544 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:02.544 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3914395 00:22:02.544 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3914395 ']' 00:22:02.544 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3914395 00:22:02.544 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:02.544 
10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:02.544 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3914395 00:22:02.544 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:02.544 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:02.544 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3914395' 00:22:02.544 killing process with pid 3914395 00:22:02.544 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3914395 00:22:02.544 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3914395 00:22:05.078 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:05.078 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:05.078 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:05.078 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:05.078 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:05.078 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:05.078 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:05.078 rmmod nvme_tcp 00:22:05.078 rmmod nvme_fabrics 00:22:05.078 rmmod nvme_keyring 00:22:05.078 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:05.078 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:05.078 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:05.078 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3935523 ']' 00:22:05.078 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3935523 00:22:05.078 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3935523 ']' 00:22:05.078 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3935523 00:22:05.078 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:05.078 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:05.078 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3935523 00:22:05.078 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:05.078 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:05.078 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3935523' 00:22:05.078 killing process with pid 3935523 00:22:05.078 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3935523 00:22:05.078 10:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3935523 00:22:06.012 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:06.012 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:06.012 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:06.012 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:06.013 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:06.013 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:06.013 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:06.013 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:06.013 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:06.013 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.013 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.013 10:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.549 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:08.549 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Dhe /tmp/spdk.key-sha256.aqn /tmp/spdk.key-sha384.BBE /tmp/spdk.key-sha512.Jyj /tmp/spdk.key-sha512.GdK /tmp/spdk.key-sha384.3MW /tmp/spdk.key-sha256.bIH '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:08.549 00:22:08.549 real 2m35.306s 00:22:08.549 user 5m54.846s 00:22:08.549 sys 0m23.413s 00:22:08.549 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:08.549 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.549 ************************************ 00:22:08.549 END TEST nvmf_auth_target 00:22:08.549 ************************************ 00:22:08.549 10:24:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:08.549 10:24:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:08.549 10:24:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:08.549 10:24:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:08.549 10:24:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:08.549 ************************************ 00:22:08.549 START TEST nvmf_bdevio_no_huge 00:22:08.549 ************************************ 00:22:08.549 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:08.549 * Looking for test storage... 
00:22:08.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:08.549 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:08.549 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:22:08.549 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:08.549 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:08.549 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:08.549 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:08.549 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:08.549 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:08.549 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:08.549 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:08.549 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:08.549 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:08.549 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:08.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.550 --rc genhtml_branch_coverage=1 00:22:08.550 --rc genhtml_function_coverage=1 00:22:08.550 --rc genhtml_legend=1 00:22:08.550 --rc geninfo_all_blocks=1 00:22:08.550 --rc geninfo_unexecuted_blocks=1 00:22:08.550 00:22:08.550 ' 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:08.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.550 --rc genhtml_branch_coverage=1 00:22:08.550 --rc genhtml_function_coverage=1 00:22:08.550 --rc genhtml_legend=1 00:22:08.550 --rc geninfo_all_blocks=1 00:22:08.550 --rc geninfo_unexecuted_blocks=1 00:22:08.550 00:22:08.550 ' 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:08.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.550 --rc genhtml_branch_coverage=1 00:22:08.550 --rc genhtml_function_coverage=1 00:22:08.550 --rc genhtml_legend=1 00:22:08.550 --rc geninfo_all_blocks=1 00:22:08.550 --rc geninfo_unexecuted_blocks=1 00:22:08.550 00:22:08.550 ' 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:08.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.550 --rc genhtml_branch_coverage=1 00:22:08.550 --rc genhtml_function_coverage=1 00:22:08.550 --rc genhtml_legend=1 00:22:08.550 --rc geninfo_all_blocks=1 00:22:08.550 --rc geninfo_unexecuted_blocks=1 00:22:08.550 00:22:08.550 ' 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.550 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:08.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:08.551 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:13.825 
10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:13.825 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:13.825 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:13.825 Found net devices under 0000:af:00.0: cvl_0_0 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:13.825 Found net devices under 0000:af:00.1: cvl_0_1 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:13.825 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:14.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:14.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:22:14.087 00:22:14.087 --- 10.0.0.2 ping statistics --- 00:22:14.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.087 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:14.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:14.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:22:14.087 00:22:14.087 --- 10.0.0.1 ping statistics --- 00:22:14.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.087 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3942826 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3942826 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3942826 ']' 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:14.087 10:24:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:14.087 [2024-12-13 10:24:07.888203] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:22:14.087 [2024-12-13 10:24:07.888301] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:14.347 [2024-12-13 10:24:08.027092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:14.347 [2024-12-13 10:24:08.144982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.347 [2024-12-13 10:24:08.145026] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.347 [2024-12-13 10:24:08.145037] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.347 [2024-12-13 10:24:08.145049] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.347 [2024-12-13 10:24:08.145058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
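Condensed recap of the nvmf_tcp_init sequence traced above — every command below already appears in this run's trace; the interface names, the cvl_0_0_ns_spdk namespace and the 10.0.0.0/24 addresses are this run's values, not fixed constants:

# Target NIC cvl_0_0 is isolated in its own network namespace; the initiator side stays on cvl_0_1 on the host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (host)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side (namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP port 4420 traffic, tagged with an SPDK_NVMF comment so cleanup can find the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# nvmf_tgt itself then runs inside the namespace, via: ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt ...

The two pings traced above (10.0.0.2 from the host, 10.0.0.1 from inside the namespace) confirm the path in both directions before the target application is started.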
00:22:14.347 [2024-12-13 10:24:08.147182] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:22:14.347 [2024-12-13 10:24:08.147275] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:22:14.347 [2024-12-13 10:24:08.147342] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:14.347 [2024-12-13 10:24:08.147363] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:22:14.915 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.915 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:14.915 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:14.915 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:14.915 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:14.915 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.915 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:14.915 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.915 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:14.915 [2024-12-13 10:24:08.759121] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.915 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.915 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:14.915 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.915 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.174 Malloc0 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:15.174 [2024-12-13 10:24:08.866567] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:15.174 { 00:22:15.174 "params": { 00:22:15.174 "name": "Nvme$subsystem", 00:22:15.174 "trtype": "$TEST_TRANSPORT", 00:22:15.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.174 "adrfam": "ipv4", 00:22:15.174 "trsvcid": "$NVMF_PORT", 00:22:15.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.174 "hdgst": ${hdgst:-false}, 00:22:15.174 "ddgst": ${ddgst:-false} 00:22:15.174 }, 00:22:15.174 "method": "bdev_nvme_attach_controller" 00:22:15.174 } 00:22:15.174 EOF 00:22:15.174 )") 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:15.174 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:15.174 "params": { 00:22:15.174 "name": "Nvme1", 00:22:15.174 "trtype": "tcp", 00:22:15.174 "traddr": "10.0.0.2", 00:22:15.174 "adrfam": "ipv4", 00:22:15.174 "trsvcid": "4420", 00:22:15.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:15.174 "hdgst": false, 00:22:15.174 "ddgst": false 00:22:15.174 }, 00:22:15.174 "method": "bdev_nvme_attach_controller" 00:22:15.174 }' 00:22:15.174 [2024-12-13 10:24:08.946599] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:22:15.174 [2024-12-13 10:24:08.946695] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3943204 ] 00:22:15.433 [2024-12-13 10:24:09.078391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:15.433 [2024-12-13 10:24:09.201780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.433 [2024-12-13 10:24:09.201848] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.433 [2024-12-13 10:24:09.201856] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.001 I/O targets: 00:22:16.001 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:16.001 00:22:16.001 00:22:16.001 CUnit - A unit testing framework for C - Version 2.1-3 00:22:16.001 http://cunit.sourceforge.net/ 00:22:16.001 00:22:16.001 00:22:16.001 Suite: bdevio tests on: Nvme1n1 00:22:16.260 Test: blockdev write read block ...passed 00:22:16.260 Test: blockdev write zeroes read block ...passed 00:22:16.260 Test: blockdev write zeroes read no split ...passed 00:22:16.260 Test: blockdev write zeroes read split ...passed 00:22:16.260 Test: blockdev write zeroes read split partial ...passed 00:22:16.260 Test: blockdev reset ...[2024-12-13 10:24:10.121729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:16.260 [2024-12-13 10:24:10.121847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000323a00 (9): Bad file descriptor 00:22:16.519 [2024-12-13 10:24:10.177371] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:22:16.519 passed 00:22:16.519 Test: blockdev write read 8 blocks ...passed 00:22:16.519 Test: blockdev write read size > 128k ...passed 00:22:16.519 Test: blockdev write read invalid size ...passed 00:22:16.519 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:16.519 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:16.519 Test: blockdev write read max offset ...passed 00:22:16.519 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:16.519 Test: blockdev writev readv 8 blocks ...passed 00:22:16.519 Test: blockdev writev readv 30 x 1block ...passed 00:22:16.519 Test: blockdev writev readv block ...passed 00:22:16.519 Test: blockdev writev readv size > 128k ...passed 00:22:16.519 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:16.519 Test: blockdev comparev and writev ...[2024-12-13 10:24:10.350028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.519 [2024-12-13 10:24:10.350075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:16.519 [2024-12-13 10:24:10.350095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.519 [2024-12-13 10:24:10.350108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.519 [2024-12-13 10:24:10.350397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.519 [2024-12-13 10:24:10.350414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:16.519 [2024-12-13 10:24:10.350431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.519 [2024-12-13 10:24:10.350443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:16.519 [2024-12-13 10:24:10.350729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.519 [2024-12-13 10:24:10.350747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:16.519 [2024-12-13 10:24:10.350764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.519 [2024-12-13 10:24:10.350776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:16.519 [2024-12-13 10:24:10.351045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.519 [2024-12-13 10:24:10.351062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:16.519 [2024-12-13 10:24:10.351078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:16.520 [2024-12-13 10:24:10.351089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:16.520 passed 00:22:16.779 Test: blockdev nvme passthru rw ...passed 00:22:16.779 Test: blockdev nvme passthru vendor specific ...[2024-12-13 10:24:10.432900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:16.779 [2024-12-13 10:24:10.432932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:16.779 [2024-12-13 10:24:10.433068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:16.779 [2024-12-13 10:24:10.433084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:16.779 [2024-12-13 10:24:10.433217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:16.779 [2024-12-13 10:24:10.433232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:16.779 [2024-12-13 10:24:10.433354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:16.779 [2024-12-13 10:24:10.433369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:16.779 passed 00:22:16.779 Test: blockdev nvme admin passthru ...passed 00:22:16.779 Test: blockdev copy ...passed 00:22:16.779 00:22:16.779 Run Summary: Type Total Ran Passed Failed Inactive 00:22:16.779 suites 1 1 n/a 0 0 00:22:16.779 tests 23 23 23 0 0 00:22:16.779 asserts 152 152 152 0 n/a 00:22:16.779 00:22:16.779 Elapsed time = 1.270 seconds 00:22:17.347 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:17.347 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.347 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:17.347 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.347 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:17.347 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:17.347 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:17.347 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:17.347 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:17.347 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:17.347 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:17.347 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:17.347 rmmod nvme_tcp 00:22:17.347 rmmod nvme_fabrics 00:22:17.347 rmmod nvme_keyring 00:22:17.347 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:17.347 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:17.347 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:17.347 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3942826 ']' 00:22:17.347 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3942826 00:22:17.347 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3942826 ']' 00:22:17.347 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3942826 00:22:17.347 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:17.347 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:17.347 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3942826 00:22:17.606 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:17.606 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:17.606 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3942826' 00:22:17.606 killing process with pid 3942826 00:22:17.606 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3942826 00:22:17.606 10:24:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3942826 00:22:18.174 10:24:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:18.174 10:24:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:18.174 10:24:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:18.174 10:24:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:18.174 10:24:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:18.174 10:24:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:18.174 10:24:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:18.174 10:24:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:18.174 10:24:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:18.174 10:24:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.174 10:24:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.174 10:24:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:20.710 00:22:20.710 real 0m12.105s 00:22:20.710 user 0m21.484s 00:22:20.710 sys 0m5.283s 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:20.710 ************************************ 00:22:20.710 END TEST nvmf_bdevio_no_huge 00:22:20.710 ************************************ 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:20.710 ************************************ 00:22:20.710 START TEST nvmf_tls 00:22:20.710 ************************************ 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:20.710 * Looking for test storage... 00:22:20.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:20.710 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:20.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.711 --rc genhtml_branch_coverage=1 00:22:20.711 --rc genhtml_function_coverage=1 00:22:20.711 --rc genhtml_legend=1 00:22:20.711 --rc geninfo_all_blocks=1 00:22:20.711 --rc geninfo_unexecuted_blocks=1 00:22:20.711 00:22:20.711 ' 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:20.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.711 --rc genhtml_branch_coverage=1 00:22:20.711 --rc genhtml_function_coverage=1 00:22:20.711 --rc genhtml_legend=1 00:22:20.711 --rc geninfo_all_blocks=1 00:22:20.711 --rc geninfo_unexecuted_blocks=1 00:22:20.711 00:22:20.711 ' 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:20.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.711 --rc genhtml_branch_coverage=1 00:22:20.711 --rc genhtml_function_coverage=1 00:22:20.711 --rc genhtml_legend=1 00:22:20.711 --rc geninfo_all_blocks=1 00:22:20.711 --rc geninfo_unexecuted_blocks=1 00:22:20.711 00:22:20.711 ' 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:20.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.711 --rc genhtml_branch_coverage=1 00:22:20.711 --rc genhtml_function_coverage=1 00:22:20.711 --rc genhtml_legend=1 00:22:20.711 --rc geninfo_all_blocks=1 00:22:20.711 --rc geninfo_unexecuted_blocks=1 00:22:20.711 00:22:20.711 ' 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
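An aside on the iptr cleanup traced during the nvmf_bdevio_no_huge teardown above (nvmf/common.sh@297 and @791): the three fragments shown there (iptables-save, grep -v SPDK_NVMF, iptables-restore) most plausibly compose into one pipeline that strips every rule carrying the SPDK_NVMF comment tag added during setup. A sketch of that reading, not a quote of common.sh:

# Hedged sketch: drop the SPDK-tagged ACCEPT rule(s) by filtering them out of the saved ruleset.
iptables-save | grep -v SPDK_NVMF | iptables-restore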
00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:20.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:20.711 10:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:25.982 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:25.982 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:25.982 Found net devices under 0000:af:00.0: cvl_0_0 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.982 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:25.983 Found net devices under 0000:af:00.1: cvl_0_1 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:25.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:25.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.489 ms 00:22:25.983 00:22:25.983 --- 10.0.0.2 ping statistics --- 00:22:25.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.983 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:25.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:25.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:22:25.983 00:22:25.983 --- 10.0.0.1 ping statistics --- 00:22:25.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.983 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3947362 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3947362 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3947362 ']' 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.983 10:24:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:25.983 [2024-12-13 10:24:19.777785] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:22:25.983 [2024-12-13 10:24:19.777878] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.242 [2024-12-13 10:24:19.896259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.242 [2024-12-13 10:24:20.001639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.242 [2024-12-13 10:24:20.001687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.242 [2024-12-13 10:24:20.001697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.242 [2024-12-13 10:24:20.001710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.242 [2024-12-13 10:24:20.001718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:26.242 [2024-12-13 10:24:20.003162] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.809 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:26.809 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:26.809 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:26.809 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:26.809 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.809 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.809 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:26.809 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:27.068 true 00:22:27.068 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:27.068 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:27.327 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:27.327 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:27.327 10:24:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:27.327 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:27.327 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:27.586 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:27.586 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:27.586 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:27.845 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:27.845 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:27.845 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:27.845 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:27.845 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:27.845 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:28.103 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:28.103 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:28.103 10:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:28.362 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:28.362 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:28.621 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:28.621 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:28.621 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:28.621 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:28.621 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.WUOot6VNck 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.ppWchDVReS 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.WUOot6VNck 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.ppWchDVReS 00:22:28.880 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:29.139 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:29.707 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.WUOot6VNck 00:22:29.707 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.WUOot6VNck 00:22:29.707 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:29.707 [2024-12-13 10:24:23.581856] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.707 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:29.966 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:30.224 [2024-12-13 10:24:23.958832] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:30.224 [2024-12-13 10:24:23.959085] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.224 10:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:30.483 malloc0 00:22:30.483 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:30.742 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.WUOot6VNck 00:22:30.742 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:31.000 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.WUOot6VNck 00:22:41.104 Initializing NVMe Controllers 00:22:41.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:41.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:41.104 Initialization complete. Launching workers. 00:22:41.104 ======================================================== 00:22:41.104 Latency(us) 00:22:41.104 Device Information : IOPS MiB/s Average min max 00:22:41.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13061.59 51.02 4900.19 1247.28 6858.31 00:22:41.104 ======================================================== 00:22:41.104 Total : 13061.59 51.02 4900.19 1247.28 6858.31 00:22:41.104 00:22:41.104 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WUOot6VNck 00:22:41.104 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:41.104 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:41.104 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:41.104 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WUOot6VNck 00:22:41.104 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:41.104 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3949869 00:22:41.104 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:41.105 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:41.105 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3949869 /var/tmp/bdevperf.sock 00:22:41.105 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3949869 ']' 00:22:41.105 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.105 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:41.105 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
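The format_interchange_psk steps traced earlier are what turn the raw hex strings into the NVMeTLSkey-1:01:...: values written to the temporary key files and later registered with keyring_file_add_key. A minimal sketch of that encoding, assuming (as the printed values suggest) that the key text is base64-encoded together with a little-endian CRC32 of itself; the helper name below is illustrative and not the exact SPDK source:

format_psk_sketch() {
    local key=$1 hash=$2
    python3 - "$key" "$hash" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()           # the configured key text, as ASCII bytes
hash_id = sys.argv[2]                # "01" or "02", as passed to format_interchange_psk
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte CRC appended to the key text
print("NVMeTLSkey-1:{}:{}:".format(hash_id, base64.b64encode(key + crc).decode()))
EOF
}

# the log above shows the first key rendered as
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
format_psk_sketch 00112233445566778899aabbccddeeff 01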
00:22:41.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.105 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:41.105 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.383 [2024-12-13 10:24:35.026557] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:22:41.383 [2024-12-13 10:24:35.026640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3949869 ] 00:22:41.383 [2024-12-13 10:24:35.132656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.383 [2024-12-13 10:24:35.242328] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.318 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.318 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:42.318 10:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WUOot6VNck 00:22:42.318 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:42.318 [2024-12-13 10:24:36.193539] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:42.577 TLSTESTn1 00:22:42.577 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:42.577 Running I/O for 10 seconds... 
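Taken together, the setup traced above reduces to a short RPC sequence: the target is built and given the PSK under the name key0, and the initiator (bdevperf) registers the same key file and references it by name when attaching. Condensed from the rpc.py calls shown in the trace (rpc abbreviates the full scripts/rpc.py path; the target-side calls go to the default RPC socket):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# target side: TCP transport, subsystem, TLS listener (-k), backing namespace, PSK
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.WUOot6VNck
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# initiator side (bdevperf RPC socket): same key file, attach by key name
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WUOot6VNck
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# kick off the verify workload (bdevperf itself was started with -q 128 -o 4096 -w verify -t 10)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests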
00:22:44.890 4361.00 IOPS, 17.04 MiB/s [2024-12-13T09:24:39.717Z] 4357.00 IOPS, 17.02 MiB/s [2024-12-13T09:24:40.653Z] 4315.67 IOPS, 16.86 MiB/s [2024-12-13T09:24:41.588Z] 4310.50 IOPS, 16.84 MiB/s [2024-12-13T09:24:42.525Z] 4288.00 IOPS, 16.75 MiB/s [2024-12-13T09:24:43.461Z] 4297.67 IOPS, 16.79 MiB/s [2024-12-13T09:24:44.837Z] 4228.57 IOPS, 16.52 MiB/s [2024-12-13T09:24:45.404Z] 4160.88 IOPS, 16.25 MiB/s [2024-12-13T09:24:46.781Z] 4105.78 IOPS, 16.04 MiB/s [2024-12-13T09:24:46.781Z] 4066.30 IOPS, 15.88 MiB/s 00:22:52.890 Latency(us) 00:22:52.890 [2024-12-13T09:24:46.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.890 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:52.890 Verification LBA range: start 0x0 length 0x2000 00:22:52.890 TLSTESTn1 : 10.03 4068.47 15.89 0.00 0.00 31403.97 7677.07 32705.58 00:22:52.890 [2024-12-13T09:24:46.781Z] =================================================================================================================== 00:22:52.890 [2024-12-13T09:24:46.781Z] Total : 4068.47 15.89 0.00 0.00 31403.97 7677.07 32705.58 00:22:52.890 { 00:22:52.890 "results": [ 00:22:52.890 { 00:22:52.890 "job": "TLSTESTn1", 00:22:52.890 "core_mask": "0x4", 00:22:52.890 "workload": "verify", 00:22:52.890 "status": "finished", 00:22:52.890 "verify_range": { 00:22:52.890 "start": 0, 00:22:52.890 "length": 8192 00:22:52.890 }, 00:22:52.890 "queue_depth": 128, 00:22:52.890 "io_size": 4096, 00:22:52.890 "runtime": 10.026139, 00:22:52.890 "iops": 4068.4654381911123, 00:22:52.890 "mibps": 15.892443117934032, 00:22:52.890 "io_failed": 0, 00:22:52.890 "io_timeout": 0, 00:22:52.890 "avg_latency_us": 31403.968992413123, 00:22:52.890 "min_latency_us": 7677.074285714286, 00:22:52.890 "max_latency_us": 32705.584761904764 00:22:52.890 } 00:22:52.890 ], 00:22:52.890 "core_count": 1 00:22:52.890 } 00:22:52.890 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:52.890 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3949869 00:22:52.890 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3949869 ']' 00:22:52.890 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3949869 00:22:52.890 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:52.890 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:52.890 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3949869 00:22:52.890 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:52.890 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:52.890 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3949869' 00:22:52.890 killing process with pid 3949869 00:22:52.890 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3949869 00:22:52.890 Received shutdown signal, test time was about 10.000000 seconds 00:22:52.890 00:22:52.890 Latency(us) 00:22:52.890 [2024-12-13T09:24:46.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.890 [2024-12-13T09:24:46.781Z] 
=================================================================================================================== 00:22:52.890 [2024-12-13T09:24:46.781Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:52.890 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3949869 00:22:53.826 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ppWchDVReS 00:22:53.826 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:53.826 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ppWchDVReS 00:22:53.826 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:53.827 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.827 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:53.827 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.827 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ppWchDVReS 00:22:53.827 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:53.827 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:53.827 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:53.827 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ppWchDVReS 00:22:53.827 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:53.827 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3951776 00:22:53.827 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:53.827 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:53.827 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3951776 /var/tmp/bdevperf.sock 00:22:53.827 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3951776 ']' 00:22:53.827 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.827 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.827 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:53.827 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.827 10:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.827 [2024-12-13 10:24:47.502590] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:22:53.827 [2024-12-13 10:24:47.502682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3951776 ] 00:22:53.827 [2024-12-13 10:24:47.615326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.085 [2024-12-13 10:24:47.725164] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.653 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.653 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:54.653 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ppWchDVReS 00:22:54.653 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:54.911 [2024-12-13 10:24:48.661917] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:54.911 [2024-12-13 10:24:48.672028] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:54.911 [2024-12-13 10:24:48.672843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (107): Transport endpoint is not connected 00:22:54.912 [2024-12-13 10:24:48.673824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:22:54.912 [2024-12-13 10:24:48.674819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:54.912 [2024-12-13 10:24:48.674845] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:54.912 [2024-12-13 10:24:48.674859] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:54.912 [2024-12-13 10:24:48.674875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
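This attach is meant to fail: the initiator registered the second key file (/tmp/tmp.ppWchDVReS), whose PSK does not match what the target holds for host1/cnode1, so the TLS handshake is torn down (the errno 107 reads above) and bdev_nvme_attach_controller reports an I/O error. The harness encodes that expectation with the NOT wrapper seen before the run_bdevperf call; stripped down to the idea (the real autotest_common.sh helper carries more bookkeeping), it simply inverts the exit status:

NOT() {
    # succeed only when the wrapped command fails
    if "$@"; then
        return 1
    fi
    return 0
}

# a zero exit status here means the mismatched-PSK attach failed, as required
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ppWchDVReS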
00:22:54.912 request: 00:22:54.912 { 00:22:54.912 "name": "TLSTEST", 00:22:54.912 "trtype": "tcp", 00:22:54.912 "traddr": "10.0.0.2", 00:22:54.912 "adrfam": "ipv4", 00:22:54.912 "trsvcid": "4420", 00:22:54.912 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.912 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.912 "prchk_reftag": false, 00:22:54.912 "prchk_guard": false, 00:22:54.912 "hdgst": false, 00:22:54.912 "ddgst": false, 00:22:54.912 "psk": "key0", 00:22:54.912 "allow_unrecognized_csi": false, 00:22:54.912 "method": "bdev_nvme_attach_controller", 00:22:54.912 "req_id": 1 00:22:54.912 } 00:22:54.912 Got JSON-RPC error response 00:22:54.912 response: 00:22:54.912 { 00:22:54.912 "code": -5, 00:22:54.912 "message": "Input/output error" 00:22:54.912 } 00:22:54.912 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3951776 00:22:54.912 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3951776 ']' 00:22:54.912 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3951776 00:22:54.912 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:54.912 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.912 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3951776 00:22:54.912 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:54.912 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:54.912 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3951776' 00:22:54.912 killing process with pid 3951776 00:22:54.912 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3951776 00:22:54.912 Received shutdown signal, test time was about 10.000000 seconds 00:22:54.912 00:22:54.912 Latency(us) 00:22:54.912 [2024-12-13T09:24:48.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.912 [2024-12-13T09:24:48.803Z] =================================================================================================================== 00:22:54.912 [2024-12-13T09:24:48.803Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:54.912 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3951776 00:22:55.848 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:55.848 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:55.848 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:55.848 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:55.848 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:55.848 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WUOot6VNck 00:22:55.848 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:55.848 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.WUOot6VNck 00:22:55.848 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:55.848 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.848 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:55.848 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.848 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WUOot6VNck 00:22:55.848 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:55.848 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:55.848 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:55.848 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WUOot6VNck 00:22:55.848 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:55.848 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3952126 00:22:55.848 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:55.849 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:55.849 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3952126 /var/tmp/bdevperf.sock 00:22:55.849 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3952126 ']' 00:22:55.849 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.849 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.849 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:55.849 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.849 10:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.849 [2024-12-13 10:24:49.708308] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:22:55.849 [2024-12-13 10:24:49.708411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3952126 ] 00:22:56.108 [2024-12-13 10:24:49.816703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.108 [2024-12-13 10:24:49.923147] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.676 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.676 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:56.676 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WUOot6VNck 00:22:56.934 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:57.194 [2024-12-13 10:24:50.871211] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:57.194 [2024-12-13 10:24:50.878650] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:57.194 [2024-12-13 10:24:50.878679] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:57.194 [2024-12-13 10:24:50.878715] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:57.194 [2024-12-13 10:24:50.879007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (107): Transport endpoint is not connected 00:22:57.194 [2024-12-13 10:24:50.879989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:22:57.194 [2024-12-13 10:24:50.880990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:57.194 [2024-12-13 10:24:50.881011] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:57.194 [2024-12-13 10:24:50.881026] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:57.194 [2024-12-13 10:24:50.881040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
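Here the key file is the right one but the host NQN is not: only host1 was added to cnode1 with nvmf_subsystem_add_host, so when host2 connects the target finds no PSK for the offered identity. The identity string in the error above is built from the key's hash indicator and the two NQNs; the helper below only reproduces the shape visible in the log and is illustrative, not an SPDK function:

psk_identity() {
    local hostnqn=$1 subnqn=$2 hash=${3:-01}
    # fixed "NVMe0R" prefix, the hash id from the key ("01" here), then hostnqn and subnqn
    printf 'NVMe0R%s %s %s\n' "$hash" "$hostnqn" "$subnqn"
}

psk_identity nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
# -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 (the identity in the error above)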
00:22:57.194 request: 00:22:57.194 { 00:22:57.194 "name": "TLSTEST", 00:22:57.194 "trtype": "tcp", 00:22:57.194 "traddr": "10.0.0.2", 00:22:57.194 "adrfam": "ipv4", 00:22:57.194 "trsvcid": "4420", 00:22:57.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.194 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:57.194 "prchk_reftag": false, 00:22:57.194 "prchk_guard": false, 00:22:57.194 "hdgst": false, 00:22:57.194 "ddgst": false, 00:22:57.194 "psk": "key0", 00:22:57.194 "allow_unrecognized_csi": false, 00:22:57.194 "method": "bdev_nvme_attach_controller", 00:22:57.194 "req_id": 1 00:22:57.194 } 00:22:57.194 Got JSON-RPC error response 00:22:57.194 response: 00:22:57.194 { 00:22:57.194 "code": -5, 00:22:57.194 "message": "Input/output error" 00:22:57.194 } 00:22:57.194 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3952126 00:22:57.194 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3952126 ']' 00:22:57.194 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3952126 00:22:57.194 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:57.194 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.194 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3952126 00:22:57.194 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:57.194 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:57.194 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3952126' 00:22:57.194 killing process with pid 3952126 00:22:57.194 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3952126 00:22:57.194 Received shutdown signal, test time was about 10.000000 seconds 00:22:57.194 00:22:57.194 Latency(us) 00:22:57.194 [2024-12-13T09:24:51.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.194 [2024-12-13T09:24:51.085Z] =================================================================================================================== 00:22:57.194 [2024-12-13T09:24:51.085Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:57.194 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3952126 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WUOot6VNck 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.WUOot6VNck 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WUOot6VNck 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WUOot6VNck 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3952580 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3952580 /var/tmp/bdevperf.sock 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3952580 ']' 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.131 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.131 [2024-12-13 10:24:51.907396] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:22:58.131 [2024-12-13 10:24:51.907501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3952580 ] 00:22:58.131 [2024-12-13 10:24:52.015254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.390 [2024-12-13 10:24:52.118808] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.958 10:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:58.958 10:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:58.958 10:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WUOot6VNck 00:22:59.217 10:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:59.217 [2024-12-13 10:24:53.091520] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:59.217 [2024-12-13 10:24:53.103368] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:59.217 [2024-12-13 10:24:53.103395] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:59.217 [2024-12-13 10:24:53.103427] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:59.218 [2024-12-13 10:24:53.104317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (107): Transport endpoint is not connected 00:22:59.218 [2024-12-13 10:24:53.105303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:22:59.218 [2024-12-13 10:24:53.106298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:22:59.218 [2024-12-13 10:24:53.106324] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:59.218 [2024-12-13 10:24:53.106338] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:59.218 [2024-12-13 10:24:53.106353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
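The third negative case points the initiator at nqn.2016-06.io.spdk:cnode2, a subsystem that was never created on the target, so the PSK lookup for NVMe0R01 host1 cnode2 fails the same way. For that attach to succeed the target would additionally need the subsystem, a TLS listener on it and an allowed host entry, roughly along these lines (hypothetical and deliberately absent from this test; rpc abbreviates scripts/rpc.py as before and the serial number is made up):

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -s SPDK00000000000002 -m 10   # serial is illustrative
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk key0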
00:22:59.218 request: 00:22:59.218 { 00:22:59.218 "name": "TLSTEST", 00:22:59.218 "trtype": "tcp", 00:22:59.218 "traddr": "10.0.0.2", 00:22:59.218 "adrfam": "ipv4", 00:22:59.218 "trsvcid": "4420", 00:22:59.218 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:59.218 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:59.218 "prchk_reftag": false, 00:22:59.218 "prchk_guard": false, 00:22:59.218 "hdgst": false, 00:22:59.218 "ddgst": false, 00:22:59.218 "psk": "key0", 00:22:59.218 "allow_unrecognized_csi": false, 00:22:59.218 "method": "bdev_nvme_attach_controller", 00:22:59.218 "req_id": 1 00:22:59.218 } 00:22:59.218 Got JSON-RPC error response 00:22:59.218 response: 00:22:59.218 { 00:22:59.218 "code": -5, 00:22:59.218 "message": "Input/output error" 00:22:59.218 } 00:22:59.477 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3952580 00:22:59.477 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3952580 ']' 00:22:59.477 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3952580 00:22:59.477 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:59.477 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:59.477 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3952580 00:22:59.477 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:59.477 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:59.477 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3952580' 00:22:59.477 killing process with pid 3952580 00:22:59.477 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3952580 00:22:59.477 Received shutdown signal, test time was about 10.000000 seconds 00:22:59.477 00:22:59.477 Latency(us) 00:22:59.477 [2024-12-13T09:24:53.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.477 [2024-12-13T09:24:53.368Z] =================================================================================================================== 00:22:59.477 [2024-12-13T09:24:53.368Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:59.477 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3952580 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:00.414 
10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3952822 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3952822 /var/tmp/bdevperf.sock 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3952822 ']' 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.414 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.414 [2024-12-13 10:24:54.132643] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:23:00.414 [2024-12-13 10:24:54.132754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3952822 ] 00:23:00.414 [2024-12-13 10:24:54.240202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.674 [2024-12-13 10:24:54.344741] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.242 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:01.242 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:01.242 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:01.242 [2024-12-13 10:24:55.108221] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:01.242 [2024-12-13 10:24:55.108265] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:01.242 request: 00:23:01.242 { 00:23:01.242 "name": "key0", 00:23:01.242 "path": "", 00:23:01.242 "method": "keyring_file_add_key", 00:23:01.242 "req_id": 1 00:23:01.242 } 00:23:01.242 Got JSON-RPC error response 00:23:01.242 response: 00:23:01.242 { 00:23:01.242 "code": -1, 00:23:01.242 "message": "Operation not permitted" 00:23:01.242 } 00:23:01.242 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:01.500 [2024-12-13 10:24:55.288814] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:01.500 [2024-12-13 10:24:55.288856] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:01.500 request: 00:23:01.500 { 00:23:01.500 "name": "TLSTEST", 00:23:01.500 "trtype": "tcp", 00:23:01.500 "traddr": "10.0.0.2", 00:23:01.500 "adrfam": "ipv4", 00:23:01.500 "trsvcid": "4420", 00:23:01.500 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.500 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.500 "prchk_reftag": false, 00:23:01.500 "prchk_guard": false, 00:23:01.500 "hdgst": false, 00:23:01.500 "ddgst": false, 00:23:01.500 "psk": "key0", 00:23:01.500 "allow_unrecognized_csi": false, 00:23:01.500 "method": "bdev_nvme_attach_controller", 00:23:01.500 "req_id": 1 00:23:01.500 } 00:23:01.500 Got JSON-RPC error response 00:23:01.500 response: 00:23:01.500 { 00:23:01.500 "code": -126, 00:23:01.500 "message": "Required key not available" 00:23:01.500 } 00:23:01.500 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3952822 00:23:01.500 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3952822 ']' 00:23:01.501 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3952822 00:23:01.501 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:01.501 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:01.501 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3952822 00:23:01.501 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:01.501 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:01.501 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3952822' 00:23:01.501 killing process with pid 3952822 00:23:01.501 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3952822 00:23:01.501 Received shutdown signal, test time was about 10.000000 seconds 00:23:01.501 00:23:01.501 Latency(us) 00:23:01.501 [2024-12-13T09:24:55.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.501 [2024-12-13T09:24:55.392Z] =================================================================================================================== 00:23:01.501 [2024-12-13T09:24:55.392Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:01.501 10:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3952822 00:23:02.437 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:02.437 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:02.437 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:02.437 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:02.437 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:02.437 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3947362 00:23:02.438 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3947362 ']' 00:23:02.438 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3947362 00:23:02.438 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:02.438 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.438 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3947362 00:23:02.438 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:02.438 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:02.438 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3947362' 00:23:02.438 killing process with pid 3947362 00:23:02.438 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3947362 00:23:02.438 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3947362 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:03.816 10:24:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.IyV0de1qDz 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.IyV0de1qDz 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3953502 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3953502 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3953502 ']' 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.816 10:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.816 [2024-12-13 10:24:57.661552] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:03.816 [2024-12-13 10:24:57.661653] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.076 [2024-12-13 10:24:57.776831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.076 [2024-12-13 10:24:57.879560] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:04.076 [2024-12-13 10:24:57.879608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:04.076 [2024-12-13 10:24:57.879619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.076 [2024-12-13 10:24:57.879630] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.076 [2024-12-13 10:24:57.879638] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:04.076 [2024-12-13 10:24:57.881101] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.643 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:04.643 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:04.643 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:04.643 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:04.643 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.643 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:04.643 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.IyV0de1qDz 00:23:04.643 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.IyV0de1qDz 00:23:04.643 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:04.902 [2024-12-13 10:24:58.675107] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.902 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:05.160 10:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:05.419 [2024-12-13 10:24:59.056108] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:05.419 [2024-12-13 10:24:59.056372] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.419 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:05.419 malloc0 00:23:05.419 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:05.678 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.IyV0de1qDz 00:23:05.937 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:05.937 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IyV0de1qDz 00:23:05.937 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:05.937 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:05.937 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:05.937 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.IyV0de1qDz 00:23:05.937 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:05.937 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:05.937 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3953767 00:23:05.937 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:05.937 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3953767 /var/tmp/bdevperf.sock 00:23:05.937 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3953767 ']' 00:23:05.937 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:05.937 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.937 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:05.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:05.937 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.937 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.196 [2024-12-13 10:24:59.874079] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
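run_bdevperf (target/tls.sh@22-42) drives the host side of the positive pass: it launches bdevperf in wait-for-RPC mode (-z) on core mask 0x4 with a 128-deep 4 KiB verify workload, registers the PSK file with the bdevperf application's own keyring over /var/tmp/bdevperf.sock, attaches a TLS-protected controller, and then starts I/O via bdevperf.py. A condensed sketch of that sequence, with paths copied from this log ($SPDK is shorthand for the workspace path; the real helper waits for the RPC socket before issuing the calls):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IyV0de1qDz
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  $SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The TLSTESTn1 results that follow report roughly 4.3k IOPS at 4 KiB, queue depth 128, over the TLS-secured connection, both as a latency table and as JSON.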
00:23:06.196 [2024-12-13 10:24:59.874169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3953767 ] 00:23:06.196 [2024-12-13 10:24:59.986578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.455 [2024-12-13 10:25:00.104860] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.022 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.022 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:07.022 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IyV0de1qDz 00:23:07.022 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:07.281 [2024-12-13 10:25:01.038068] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.281 TLSTESTn1 00:23:07.281 10:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:07.540 Running I/O for 10 seconds... 00:23:09.412 4502.00 IOPS, 17.59 MiB/s [2024-12-13T09:25:04.240Z] 4577.00 IOPS, 17.88 MiB/s [2024-12-13T09:25:05.618Z] 4518.67 IOPS, 17.65 MiB/s [2024-12-13T09:25:06.553Z] 4470.25 IOPS, 17.46 MiB/s [2024-12-13T09:25:07.490Z] 4436.20 IOPS, 17.33 MiB/s [2024-12-13T09:25:08.425Z] 4410.83 IOPS, 17.23 MiB/s [2024-12-13T09:25:09.362Z] 4394.57 IOPS, 17.17 MiB/s [2024-12-13T09:25:10.297Z] 4381.75 IOPS, 17.12 MiB/s [2024-12-13T09:25:11.674Z] 4357.11 IOPS, 17.02 MiB/s [2024-12-13T09:25:11.674Z] 4340.50 IOPS, 16.96 MiB/s 00:23:17.783 Latency(us) 00:23:17.783 [2024-12-13T09:25:11.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.783 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:17.783 Verification LBA range: start 0x0 length 0x2000 00:23:17.783 TLSTESTn1 : 10.02 4343.98 16.97 0.00 0.00 29417.94 7957.94 40195.41 00:23:17.783 [2024-12-13T09:25:11.674Z] =================================================================================================================== 00:23:17.783 [2024-12-13T09:25:11.674Z] Total : 4343.98 16.97 0.00 0.00 29417.94 7957.94 40195.41 00:23:17.783 { 00:23:17.783 "results": [ 00:23:17.783 { 00:23:17.783 "job": "TLSTESTn1", 00:23:17.783 "core_mask": "0x4", 00:23:17.783 "workload": "verify", 00:23:17.783 "status": "finished", 00:23:17.783 "verify_range": { 00:23:17.783 "start": 0, 00:23:17.783 "length": 8192 00:23:17.783 }, 00:23:17.783 "queue_depth": 128, 00:23:17.783 "io_size": 4096, 00:23:17.783 "runtime": 10.02123, 00:23:17.783 "iops": 4343.977735268026, 00:23:17.783 "mibps": 16.968663028390726, 00:23:17.783 "io_failed": 0, 00:23:17.783 "io_timeout": 0, 00:23:17.783 "avg_latency_us": 29417.940021090122, 00:23:17.783 "min_latency_us": 7957.942857142857, 00:23:17.783 "max_latency_us": 40195.41333333333 00:23:17.783 } 00:23:17.783 ], 00:23:17.783 
"core_count": 1 00:23:17.783 } 00:23:17.783 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:17.783 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3953767 00:23:17.783 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3953767 ']' 00:23:17.783 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3953767 00:23:17.783 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:17.783 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.783 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3953767 00:23:17.783 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:17.783 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:17.783 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3953767' 00:23:17.783 killing process with pid 3953767 00:23:17.783 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3953767 00:23:17.783 Received shutdown signal, test time was about 10.000000 seconds 00:23:17.783 00:23:17.783 Latency(us) 00:23:17.783 [2024-12-13T09:25:11.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.783 [2024-12-13T09:25:11.674Z] =================================================================================================================== 00:23:17.783 [2024-12-13T09:25:11.674Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:17.783 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3953767 00:23:18.720 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.IyV0de1qDz 00:23:18.720 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IyV0de1qDz 00:23:18.720 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:18.720 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IyV0de1qDz 00:23:18.720 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:18.720 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:18.720 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:18.720 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:18.720 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IyV0de1qDz 00:23:18.720 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:18.720 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:18.720 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:23:18.721 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.IyV0de1qDz 00:23:18.721 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:18.721 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3955765 00:23:18.721 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:18.721 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:18.721 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3955765 /var/tmp/bdevperf.sock 00:23:18.721 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3955765 ']' 00:23:18.721 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.721 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:18.721 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:18.721 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:18.721 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.721 [2024-12-13 10:25:12.328964] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:23:18.721 [2024-12-13 10:25:12.329055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3955765 ] 00:23:18.721 [2024-12-13 10:25:12.435803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.721 [2024-12-13 10:25:12.540792] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.289 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:19.289 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:19.289 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IyV0de1qDz 00:23:19.547 [2024-12-13 10:25:13.305049] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.IyV0de1qDz': 0100666 00:23:19.547 [2024-12-13 10:25:13.305091] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:19.547 request: 00:23:19.547 { 00:23:19.547 "name": "key0", 00:23:19.547 "path": "/tmp/tmp.IyV0de1qDz", 00:23:19.547 "method": "keyring_file_add_key", 00:23:19.547 "req_id": 1 00:23:19.547 } 00:23:19.547 Got JSON-RPC error response 00:23:19.547 response: 00:23:19.547 { 00:23:19.547 "code": -1, 00:23:19.547 "message": "Operation not permitted" 00:23:19.547 } 00:23:19.547 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:19.806 [2024-12-13 10:25:13.481622] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:19.806 [2024-12-13 10:25:13.481667] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:19.806 request: 00:23:19.806 { 00:23:19.806 "name": "TLSTEST", 00:23:19.806 "trtype": "tcp", 00:23:19.806 "traddr": "10.0.0.2", 00:23:19.806 "adrfam": "ipv4", 00:23:19.806 "trsvcid": "4420", 00:23:19.806 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.806 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:19.806 "prchk_reftag": false, 00:23:19.806 "prchk_guard": false, 00:23:19.806 "hdgst": false, 00:23:19.806 "ddgst": false, 00:23:19.806 "psk": "key0", 00:23:19.806 "allow_unrecognized_csi": false, 00:23:19.806 "method": "bdev_nvme_attach_controller", 00:23:19.806 "req_id": 1 00:23:19.806 } 00:23:19.806 Got JSON-RPC error response 00:23:19.806 response: 00:23:19.806 { 00:23:19.806 "code": -126, 00:23:19.806 "message": "Required key not available" 00:23:19.806 } 00:23:19.806 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3955765 00:23:19.806 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3955765 ']' 00:23:19.806 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3955765 00:23:19.806 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:19.806 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:19.806 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3955765 00:23:19.806 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:19.806 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:19.806 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3955765' 00:23:19.806 killing process with pid 3955765 00:23:19.806 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3955765 00:23:19.806 Received shutdown signal, test time was about 10.000000 seconds 00:23:19.806 00:23:19.806 Latency(us) 00:23:19.806 [2024-12-13T09:25:13.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.806 [2024-12-13T09:25:13.697Z] =================================================================================================================== 00:23:19.806 [2024-12-13T09:25:13.697Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:19.806 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3955765 00:23:20.742 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:20.742 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:20.742 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:20.742 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:20.742 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:20.742 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3953502 00:23:20.742 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3953502 ']' 00:23:20.742 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3953502 00:23:20.742 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:20.742 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.742 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3953502 00:23:20.742 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:20.742 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:20.742 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3953502' 00:23:20.742 killing process with pid 3953502 00:23:20.742 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3953502 00:23:20.742 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3953502 00:23:22.124 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:22.124 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:22.124 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:22.124 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.124 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:22.124 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3956422 00:23:22.124 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3956422 00:23:22.124 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3956422 ']' 00:23:22.124 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.124 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.124 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.124 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.124 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.124 [2024-12-13 10:25:15.784489] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:22.124 [2024-12-13 10:25:15.784597] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.124 [2024-12-13 10:25:15.902512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.124 [2024-12-13 10:25:16.004694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.124 [2024-12-13 10:25:16.004744] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.124 [2024-12-13 10:25:16.004755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.124 [2024-12-13 10:25:16.004765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.124 [2024-12-13 10:25:16.004775] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
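A fresh nvmf_tgt (pid 3956422) has just been started for the second negative case. The setup_nvmf_tgt helper it is about to run (target/tls.sh@50-59, the same helper used for the earlier positive pass) configures the target side of the TLS test; condensed, it issues the RPCs below, with all paths as in the log. In the run that follows it is wrapped in NOT: the key file still has mode 0666, so keyring_file_add_key fails and the subsequent nvmf_subsystem_add_host --psk key0 reports "Key 'key0' does not exist".

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # shorthand for readability
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k  # -k: TLS-secured listener (see the "TLS support is considered experimental" notices)
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC keyring_file_add_key key0 /tmp/tmp.IyV0de1qDz
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0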
00:23:22.124 [2024-12-13 10:25:16.006152] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.691 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.691 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:22.691 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:22.691 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:22.691 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.949 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.949 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.IyV0de1qDz 00:23:22.949 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:22.949 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.IyV0de1qDz 00:23:22.949 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:22.949 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:22.950 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:22.950 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:22.950 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.IyV0de1qDz 00:23:22.950 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.IyV0de1qDz 00:23:22.950 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:22.950 [2024-12-13 10:25:16.793411] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.950 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:23.208 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:23.467 [2024-12-13 10:25:17.162373] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:23.467 [2024-12-13 10:25:17.162637] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.467 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:23.725 malloc0 00:23:23.725 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:23.725 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.IyV0de1qDz 00:23:24.068 [2024-12-13 
10:25:17.727492] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.IyV0de1qDz': 0100666 00:23:24.068 [2024-12-13 10:25:17.727527] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:24.068 request: 00:23:24.068 { 00:23:24.068 "name": "key0", 00:23:24.068 "path": "/tmp/tmp.IyV0de1qDz", 00:23:24.068 "method": "keyring_file_add_key", 00:23:24.068 "req_id": 1 00:23:24.068 } 00:23:24.068 Got JSON-RPC error response 00:23:24.068 response: 00:23:24.068 { 00:23:24.068 "code": -1, 00:23:24.068 "message": "Operation not permitted" 00:23:24.068 } 00:23:24.068 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:24.068 [2024-12-13 10:25:17.911985] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:24.068 [2024-12-13 10:25:17.912031] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:24.068 request: 00:23:24.068 { 00:23:24.068 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.068 "host": "nqn.2016-06.io.spdk:host1", 00:23:24.068 "psk": "key0", 00:23:24.068 "method": "nvmf_subsystem_add_host", 00:23:24.068 "req_id": 1 00:23:24.068 } 00:23:24.068 Got JSON-RPC error response 00:23:24.068 response: 00:23:24.068 { 00:23:24.068 "code": -32603, 00:23:24.068 "message": "Internal error" 00:23:24.068 } 00:23:24.068 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:24.068 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:24.068 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:24.068 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:24.068 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3956422 00:23:24.068 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3956422 ']' 00:23:24.068 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3956422 00:23:24.068 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:24.370 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.370 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3956422 00:23:24.370 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:24.370 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:24.370 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3956422' 00:23:24.370 killing process with pid 3956422 00:23:24.370 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3956422 00:23:24.370 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3956422 00:23:25.307 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.IyV0de1qDz 00:23:25.307 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:25.307 10:25:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:25.307 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:25.307 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.307 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3956938 00:23:25.307 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:25.307 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3956938 00:23:25.307 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3956938 ']' 00:23:25.307 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.307 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.307 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.307 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.307 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.566 [2024-12-13 10:25:19.261430] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:25.566 [2024-12-13 10:25:19.261540] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.566 [2024-12-13 10:25:19.377535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.825 [2024-12-13 10:25:19.474603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.825 [2024-12-13 10:25:19.474648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.825 [2024-12-13 10:25:19.474659] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.825 [2024-12-13 10:25:19.474671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.825 [2024-12-13 10:25:19.474680] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
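This nvmf_tgt instance (pid 3956938) backs the next positive pass: with the key file restored to 0600 at target/tls.sh@182, setup_nvmf_tgt now succeeds, a new bdevperf initiator attaches over TLS, and target/tls.sh@198-199 snapshot both running configurations with save_config, producing the two large JSON documents printed below (target side first, then the bdevperf side). The target is later relaunched directly from that captured JSON (the nvmfappstart -m 0x2 -c /dev/fd/62 near the end of this excerpt), which is how the test confirms the keyring and TLS listener settings are reproducible from a saved configuration. A rough sketch of the capture step, assuming the same sockets as the log; the /dev/fd/62 plumbing and network-namespace wrapping are handled by the test helpers, so the relaunch line is illustrative only:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  tgtconf=$($RPC save_config)                                  # target side: keyring, sock, bdev, nvmf subsystems
  bdevperfconf=$($RPC -s /var/tmp/bdevperf.sock save_config)   # initiator side: key0 plus the TLSTEST controller
  # later: restart the target from the captured JSON (illustrative plumbing)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf")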
00:23:25.825 [2024-12-13 10:25:19.475921] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.391 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.391 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:26.391 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:26.391 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:26.391 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.391 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.391 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.IyV0de1qDz 00:23:26.391 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.IyV0de1qDz 00:23:26.391 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:26.391 [2024-12-13 10:25:20.254573] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.391 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:26.650 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:26.908 [2024-12-13 10:25:20.627510] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:26.908 [2024-12-13 10:25:20.627759] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.908 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:27.167 malloc0 00:23:27.167 10:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:27.167 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.IyV0de1qDz 00:23:27.425 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:27.684 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3957399 00:23:27.684 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:27.684 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:27.684 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3957399 /var/tmp/bdevperf.sock 00:23:27.684 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3957399 ']' 00:23:27.684 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.684 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.684 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.684 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.684 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.684 [2024-12-13 10:25:21.502363] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:27.684 [2024-12-13 10:25:21.502463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3957399 ] 00:23:27.942 [2024-12-13 10:25:21.610385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.942 [2024-12-13 10:25:21.719894] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.510 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.510 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:28.510 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IyV0de1qDz 00:23:28.768 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:29.027 [2024-12-13 10:25:22.672534] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:29.027 TLSTESTn1 00:23:29.027 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:29.286 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:29.286 "subsystems": [ 00:23:29.286 { 00:23:29.286 "subsystem": "keyring", 00:23:29.286 "config": [ 00:23:29.286 { 00:23:29.286 "method": "keyring_file_add_key", 00:23:29.286 "params": { 00:23:29.286 "name": "key0", 00:23:29.286 "path": "/tmp/tmp.IyV0de1qDz" 00:23:29.286 } 00:23:29.286 } 00:23:29.286 ] 00:23:29.286 }, 00:23:29.286 { 00:23:29.286 "subsystem": "iobuf", 00:23:29.286 "config": [ 00:23:29.286 { 00:23:29.286 "method": "iobuf_set_options", 00:23:29.286 "params": { 00:23:29.286 "small_pool_count": 8192, 00:23:29.286 "large_pool_count": 1024, 00:23:29.286 "small_bufsize": 8192, 00:23:29.286 "large_bufsize": 135168, 00:23:29.286 "enable_numa": false 00:23:29.286 } 00:23:29.286 } 00:23:29.286 ] 00:23:29.286 }, 00:23:29.286 { 00:23:29.286 "subsystem": "sock", 00:23:29.286 "config": [ 00:23:29.286 { 00:23:29.286 "method": "sock_set_default_impl", 00:23:29.286 "params": { 00:23:29.286 "impl_name": "posix" 
00:23:29.286 } 00:23:29.286 }, 00:23:29.286 { 00:23:29.286 "method": "sock_impl_set_options", 00:23:29.286 "params": { 00:23:29.286 "impl_name": "ssl", 00:23:29.286 "recv_buf_size": 4096, 00:23:29.286 "send_buf_size": 4096, 00:23:29.286 "enable_recv_pipe": true, 00:23:29.286 "enable_quickack": false, 00:23:29.286 "enable_placement_id": 0, 00:23:29.286 "enable_zerocopy_send_server": true, 00:23:29.287 "enable_zerocopy_send_client": false, 00:23:29.287 "zerocopy_threshold": 0, 00:23:29.287 "tls_version": 0, 00:23:29.287 "enable_ktls": false 00:23:29.287 } 00:23:29.287 }, 00:23:29.287 { 00:23:29.287 "method": "sock_impl_set_options", 00:23:29.287 "params": { 00:23:29.287 "impl_name": "posix", 00:23:29.287 "recv_buf_size": 2097152, 00:23:29.287 "send_buf_size": 2097152, 00:23:29.287 "enable_recv_pipe": true, 00:23:29.287 "enable_quickack": false, 00:23:29.287 "enable_placement_id": 0, 00:23:29.287 "enable_zerocopy_send_server": true, 00:23:29.287 "enable_zerocopy_send_client": false, 00:23:29.287 "zerocopy_threshold": 0, 00:23:29.287 "tls_version": 0, 00:23:29.287 "enable_ktls": false 00:23:29.287 } 00:23:29.287 } 00:23:29.287 ] 00:23:29.287 }, 00:23:29.287 { 00:23:29.287 "subsystem": "vmd", 00:23:29.287 "config": [] 00:23:29.287 }, 00:23:29.287 { 00:23:29.287 "subsystem": "accel", 00:23:29.287 "config": [ 00:23:29.287 { 00:23:29.287 "method": "accel_set_options", 00:23:29.287 "params": { 00:23:29.287 "small_cache_size": 128, 00:23:29.287 "large_cache_size": 16, 00:23:29.287 "task_count": 2048, 00:23:29.287 "sequence_count": 2048, 00:23:29.287 "buf_count": 2048 00:23:29.287 } 00:23:29.287 } 00:23:29.287 ] 00:23:29.287 }, 00:23:29.287 { 00:23:29.287 "subsystem": "bdev", 00:23:29.287 "config": [ 00:23:29.287 { 00:23:29.287 "method": "bdev_set_options", 00:23:29.287 "params": { 00:23:29.287 "bdev_io_pool_size": 65535, 00:23:29.287 "bdev_io_cache_size": 256, 00:23:29.287 "bdev_auto_examine": true, 00:23:29.287 "iobuf_small_cache_size": 128, 00:23:29.287 "iobuf_large_cache_size": 16 00:23:29.287 } 00:23:29.287 }, 00:23:29.287 { 00:23:29.287 "method": "bdev_raid_set_options", 00:23:29.287 "params": { 00:23:29.287 "process_window_size_kb": 1024, 00:23:29.287 "process_max_bandwidth_mb_sec": 0 00:23:29.287 } 00:23:29.287 }, 00:23:29.287 { 00:23:29.287 "method": "bdev_iscsi_set_options", 00:23:29.287 "params": { 00:23:29.287 "timeout_sec": 30 00:23:29.287 } 00:23:29.287 }, 00:23:29.287 { 00:23:29.287 "method": "bdev_nvme_set_options", 00:23:29.287 "params": { 00:23:29.287 "action_on_timeout": "none", 00:23:29.287 "timeout_us": 0, 00:23:29.287 "timeout_admin_us": 0, 00:23:29.287 "keep_alive_timeout_ms": 10000, 00:23:29.287 "arbitration_burst": 0, 00:23:29.287 "low_priority_weight": 0, 00:23:29.287 "medium_priority_weight": 0, 00:23:29.287 "high_priority_weight": 0, 00:23:29.287 "nvme_adminq_poll_period_us": 10000, 00:23:29.287 "nvme_ioq_poll_period_us": 0, 00:23:29.287 "io_queue_requests": 0, 00:23:29.287 "delay_cmd_submit": true, 00:23:29.287 "transport_retry_count": 4, 00:23:29.287 "bdev_retry_count": 3, 00:23:29.287 "transport_ack_timeout": 0, 00:23:29.287 "ctrlr_loss_timeout_sec": 0, 00:23:29.287 "reconnect_delay_sec": 0, 00:23:29.287 "fast_io_fail_timeout_sec": 0, 00:23:29.287 "disable_auto_failback": false, 00:23:29.287 "generate_uuids": false, 00:23:29.287 "transport_tos": 0, 00:23:29.287 "nvme_error_stat": false, 00:23:29.287 "rdma_srq_size": 0, 00:23:29.287 "io_path_stat": false, 00:23:29.287 "allow_accel_sequence": false, 00:23:29.287 "rdma_max_cq_size": 0, 00:23:29.287 
"rdma_cm_event_timeout_ms": 0, 00:23:29.287 "dhchap_digests": [ 00:23:29.287 "sha256", 00:23:29.287 "sha384", 00:23:29.287 "sha512" 00:23:29.287 ], 00:23:29.287 "dhchap_dhgroups": [ 00:23:29.287 "null", 00:23:29.287 "ffdhe2048", 00:23:29.287 "ffdhe3072", 00:23:29.287 "ffdhe4096", 00:23:29.287 "ffdhe6144", 00:23:29.287 "ffdhe8192" 00:23:29.287 ], 00:23:29.287 "rdma_umr_per_io": false 00:23:29.287 } 00:23:29.287 }, 00:23:29.287 { 00:23:29.287 "method": "bdev_nvme_set_hotplug", 00:23:29.287 "params": { 00:23:29.287 "period_us": 100000, 00:23:29.287 "enable": false 00:23:29.287 } 00:23:29.287 }, 00:23:29.287 { 00:23:29.287 "method": "bdev_malloc_create", 00:23:29.287 "params": { 00:23:29.287 "name": "malloc0", 00:23:29.287 "num_blocks": 8192, 00:23:29.287 "block_size": 4096, 00:23:29.287 "physical_block_size": 4096, 00:23:29.287 "uuid": "1b51c49d-6e89-4c19-83a6-ccde0c5a6ebf", 00:23:29.287 "optimal_io_boundary": 0, 00:23:29.287 "md_size": 0, 00:23:29.287 "dif_type": 0, 00:23:29.287 "dif_is_head_of_md": false, 00:23:29.287 "dif_pi_format": 0 00:23:29.287 } 00:23:29.287 }, 00:23:29.287 { 00:23:29.287 "method": "bdev_wait_for_examine" 00:23:29.287 } 00:23:29.287 ] 00:23:29.287 }, 00:23:29.287 { 00:23:29.287 "subsystem": "nbd", 00:23:29.287 "config": [] 00:23:29.287 }, 00:23:29.287 { 00:23:29.287 "subsystem": "scheduler", 00:23:29.287 "config": [ 00:23:29.287 { 00:23:29.287 "method": "framework_set_scheduler", 00:23:29.287 "params": { 00:23:29.287 "name": "static" 00:23:29.287 } 00:23:29.287 } 00:23:29.287 ] 00:23:29.287 }, 00:23:29.287 { 00:23:29.287 "subsystem": "nvmf", 00:23:29.287 "config": [ 00:23:29.287 { 00:23:29.287 "method": "nvmf_set_config", 00:23:29.287 "params": { 00:23:29.287 "discovery_filter": "match_any", 00:23:29.287 "admin_cmd_passthru": { 00:23:29.287 "identify_ctrlr": false 00:23:29.287 }, 00:23:29.287 "dhchap_digests": [ 00:23:29.287 "sha256", 00:23:29.287 "sha384", 00:23:29.287 "sha512" 00:23:29.287 ], 00:23:29.287 "dhchap_dhgroups": [ 00:23:29.287 "null", 00:23:29.287 "ffdhe2048", 00:23:29.287 "ffdhe3072", 00:23:29.287 "ffdhe4096", 00:23:29.287 "ffdhe6144", 00:23:29.287 "ffdhe8192" 00:23:29.287 ] 00:23:29.287 } 00:23:29.287 }, 00:23:29.287 { 00:23:29.287 "method": "nvmf_set_max_subsystems", 00:23:29.287 "params": { 00:23:29.287 "max_subsystems": 1024 00:23:29.287 } 00:23:29.287 }, 00:23:29.287 { 00:23:29.287 "method": "nvmf_set_crdt", 00:23:29.287 "params": { 00:23:29.287 "crdt1": 0, 00:23:29.287 "crdt2": 0, 00:23:29.287 "crdt3": 0 00:23:29.287 } 00:23:29.287 }, 00:23:29.287 { 00:23:29.287 "method": "nvmf_create_transport", 00:23:29.287 "params": { 00:23:29.287 "trtype": "TCP", 00:23:29.287 "max_queue_depth": 128, 00:23:29.287 "max_io_qpairs_per_ctrlr": 127, 00:23:29.287 "in_capsule_data_size": 4096, 00:23:29.287 "max_io_size": 131072, 00:23:29.287 "io_unit_size": 131072, 00:23:29.287 "max_aq_depth": 128, 00:23:29.287 "num_shared_buffers": 511, 00:23:29.287 "buf_cache_size": 4294967295, 00:23:29.287 "dif_insert_or_strip": false, 00:23:29.287 "zcopy": false, 00:23:29.287 "c2h_success": false, 00:23:29.287 "sock_priority": 0, 00:23:29.287 "abort_timeout_sec": 1, 00:23:29.287 "ack_timeout": 0, 00:23:29.287 "data_wr_pool_size": 0 00:23:29.287 } 00:23:29.287 }, 00:23:29.287 { 00:23:29.287 "method": "nvmf_create_subsystem", 00:23:29.287 "params": { 00:23:29.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.287 "allow_any_host": false, 00:23:29.287 "serial_number": "SPDK00000000000001", 00:23:29.287 "model_number": "SPDK bdev Controller", 00:23:29.287 "max_namespaces": 10, 
00:23:29.287 "min_cntlid": 1, 00:23:29.287 "max_cntlid": 65519, 00:23:29.287 "ana_reporting": false 00:23:29.287 } 00:23:29.287 }, 00:23:29.287 { 00:23:29.287 "method": "nvmf_subsystem_add_host", 00:23:29.287 "params": { 00:23:29.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.287 "host": "nqn.2016-06.io.spdk:host1", 00:23:29.287 "psk": "key0" 00:23:29.287 } 00:23:29.287 }, 00:23:29.287 { 00:23:29.287 "method": "nvmf_subsystem_add_ns", 00:23:29.287 "params": { 00:23:29.287 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.287 "namespace": { 00:23:29.287 "nsid": 1, 00:23:29.287 "bdev_name": "malloc0", 00:23:29.287 "nguid": "1B51C49D6E894C1983A6CCDE0C5A6EBF", 00:23:29.287 "uuid": "1b51c49d-6e89-4c19-83a6-ccde0c5a6ebf", 00:23:29.287 "no_auto_visible": false 00:23:29.287 } 00:23:29.287 } 00:23:29.287 }, 00:23:29.287 { 00:23:29.287 "method": "nvmf_subsystem_add_listener", 00:23:29.287 "params": { 00:23:29.288 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.288 "listen_address": { 00:23:29.288 "trtype": "TCP", 00:23:29.288 "adrfam": "IPv4", 00:23:29.288 "traddr": "10.0.0.2", 00:23:29.288 "trsvcid": "4420" 00:23:29.288 }, 00:23:29.288 "secure_channel": true 00:23:29.288 } 00:23:29.288 } 00:23:29.288 ] 00:23:29.288 } 00:23:29.288 ] 00:23:29.288 }' 00:23:29.288 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:29.547 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:29.547 "subsystems": [ 00:23:29.547 { 00:23:29.547 "subsystem": "keyring", 00:23:29.547 "config": [ 00:23:29.547 { 00:23:29.547 "method": "keyring_file_add_key", 00:23:29.547 "params": { 00:23:29.547 "name": "key0", 00:23:29.547 "path": "/tmp/tmp.IyV0de1qDz" 00:23:29.547 } 00:23:29.547 } 00:23:29.547 ] 00:23:29.547 }, 00:23:29.547 { 00:23:29.547 "subsystem": "iobuf", 00:23:29.547 "config": [ 00:23:29.547 { 00:23:29.547 "method": "iobuf_set_options", 00:23:29.547 "params": { 00:23:29.547 "small_pool_count": 8192, 00:23:29.547 "large_pool_count": 1024, 00:23:29.547 "small_bufsize": 8192, 00:23:29.547 "large_bufsize": 135168, 00:23:29.547 "enable_numa": false 00:23:29.547 } 00:23:29.547 } 00:23:29.547 ] 00:23:29.547 }, 00:23:29.547 { 00:23:29.547 "subsystem": "sock", 00:23:29.547 "config": [ 00:23:29.547 { 00:23:29.547 "method": "sock_set_default_impl", 00:23:29.547 "params": { 00:23:29.547 "impl_name": "posix" 00:23:29.547 } 00:23:29.547 }, 00:23:29.547 { 00:23:29.547 "method": "sock_impl_set_options", 00:23:29.547 "params": { 00:23:29.547 "impl_name": "ssl", 00:23:29.547 "recv_buf_size": 4096, 00:23:29.547 "send_buf_size": 4096, 00:23:29.547 "enable_recv_pipe": true, 00:23:29.547 "enable_quickack": false, 00:23:29.547 "enable_placement_id": 0, 00:23:29.547 "enable_zerocopy_send_server": true, 00:23:29.547 "enable_zerocopy_send_client": false, 00:23:29.547 "zerocopy_threshold": 0, 00:23:29.547 "tls_version": 0, 00:23:29.547 "enable_ktls": false 00:23:29.547 } 00:23:29.547 }, 00:23:29.547 { 00:23:29.547 "method": "sock_impl_set_options", 00:23:29.547 "params": { 00:23:29.547 "impl_name": "posix", 00:23:29.547 "recv_buf_size": 2097152, 00:23:29.547 "send_buf_size": 2097152, 00:23:29.547 "enable_recv_pipe": true, 00:23:29.547 "enable_quickack": false, 00:23:29.547 "enable_placement_id": 0, 00:23:29.547 "enable_zerocopy_send_server": true, 00:23:29.547 "enable_zerocopy_send_client": false, 00:23:29.547 "zerocopy_threshold": 0, 00:23:29.547 "tls_version": 0, 00:23:29.547 
"enable_ktls": false 00:23:29.547 } 00:23:29.547 } 00:23:29.547 ] 00:23:29.547 }, 00:23:29.547 { 00:23:29.547 "subsystem": "vmd", 00:23:29.547 "config": [] 00:23:29.547 }, 00:23:29.547 { 00:23:29.547 "subsystem": "accel", 00:23:29.547 "config": [ 00:23:29.547 { 00:23:29.547 "method": "accel_set_options", 00:23:29.547 "params": { 00:23:29.547 "small_cache_size": 128, 00:23:29.547 "large_cache_size": 16, 00:23:29.547 "task_count": 2048, 00:23:29.547 "sequence_count": 2048, 00:23:29.547 "buf_count": 2048 00:23:29.547 } 00:23:29.547 } 00:23:29.547 ] 00:23:29.547 }, 00:23:29.547 { 00:23:29.547 "subsystem": "bdev", 00:23:29.547 "config": [ 00:23:29.547 { 00:23:29.547 "method": "bdev_set_options", 00:23:29.547 "params": { 00:23:29.547 "bdev_io_pool_size": 65535, 00:23:29.547 "bdev_io_cache_size": 256, 00:23:29.547 "bdev_auto_examine": true, 00:23:29.547 "iobuf_small_cache_size": 128, 00:23:29.547 "iobuf_large_cache_size": 16 00:23:29.547 } 00:23:29.547 }, 00:23:29.547 { 00:23:29.547 "method": "bdev_raid_set_options", 00:23:29.547 "params": { 00:23:29.547 "process_window_size_kb": 1024, 00:23:29.547 "process_max_bandwidth_mb_sec": 0 00:23:29.547 } 00:23:29.547 }, 00:23:29.547 { 00:23:29.547 "method": "bdev_iscsi_set_options", 00:23:29.547 "params": { 00:23:29.547 "timeout_sec": 30 00:23:29.547 } 00:23:29.547 }, 00:23:29.547 { 00:23:29.547 "method": "bdev_nvme_set_options", 00:23:29.547 "params": { 00:23:29.547 "action_on_timeout": "none", 00:23:29.547 "timeout_us": 0, 00:23:29.547 "timeout_admin_us": 0, 00:23:29.547 "keep_alive_timeout_ms": 10000, 00:23:29.547 "arbitration_burst": 0, 00:23:29.547 "low_priority_weight": 0, 00:23:29.547 "medium_priority_weight": 0, 00:23:29.547 "high_priority_weight": 0, 00:23:29.547 "nvme_adminq_poll_period_us": 10000, 00:23:29.547 "nvme_ioq_poll_period_us": 0, 00:23:29.547 "io_queue_requests": 512, 00:23:29.547 "delay_cmd_submit": true, 00:23:29.547 "transport_retry_count": 4, 00:23:29.547 "bdev_retry_count": 3, 00:23:29.547 "transport_ack_timeout": 0, 00:23:29.547 "ctrlr_loss_timeout_sec": 0, 00:23:29.547 "reconnect_delay_sec": 0, 00:23:29.547 "fast_io_fail_timeout_sec": 0, 00:23:29.547 "disable_auto_failback": false, 00:23:29.547 "generate_uuids": false, 00:23:29.547 "transport_tos": 0, 00:23:29.547 "nvme_error_stat": false, 00:23:29.547 "rdma_srq_size": 0, 00:23:29.547 "io_path_stat": false, 00:23:29.547 "allow_accel_sequence": false, 00:23:29.547 "rdma_max_cq_size": 0, 00:23:29.547 "rdma_cm_event_timeout_ms": 0, 00:23:29.547 "dhchap_digests": [ 00:23:29.547 "sha256", 00:23:29.547 "sha384", 00:23:29.547 "sha512" 00:23:29.547 ], 00:23:29.547 "dhchap_dhgroups": [ 00:23:29.547 "null", 00:23:29.547 "ffdhe2048", 00:23:29.547 "ffdhe3072", 00:23:29.547 "ffdhe4096", 00:23:29.547 "ffdhe6144", 00:23:29.547 "ffdhe8192" 00:23:29.547 ], 00:23:29.547 "rdma_umr_per_io": false 00:23:29.547 } 00:23:29.547 }, 00:23:29.547 { 00:23:29.547 "method": "bdev_nvme_attach_controller", 00:23:29.547 "params": { 00:23:29.547 "name": "TLSTEST", 00:23:29.547 "trtype": "TCP", 00:23:29.547 "adrfam": "IPv4", 00:23:29.547 "traddr": "10.0.0.2", 00:23:29.547 "trsvcid": "4420", 00:23:29.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.547 "prchk_reftag": false, 00:23:29.547 "prchk_guard": false, 00:23:29.547 "ctrlr_loss_timeout_sec": 0, 00:23:29.547 "reconnect_delay_sec": 0, 00:23:29.547 "fast_io_fail_timeout_sec": 0, 00:23:29.547 "psk": "key0", 00:23:29.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.548 "hdgst": false, 00:23:29.548 "ddgst": false, 00:23:29.548 "multipath": "multipath" 
00:23:29.548 } 00:23:29.548 }, 00:23:29.548 { 00:23:29.548 "method": "bdev_nvme_set_hotplug", 00:23:29.548 "params": { 00:23:29.548 "period_us": 100000, 00:23:29.548 "enable": false 00:23:29.548 } 00:23:29.548 }, 00:23:29.548 { 00:23:29.548 "method": "bdev_wait_for_examine" 00:23:29.548 } 00:23:29.548 ] 00:23:29.548 }, 00:23:29.548 { 00:23:29.548 "subsystem": "nbd", 00:23:29.548 "config": [] 00:23:29.548 } 00:23:29.548 ] 00:23:29.548 }' 00:23:29.548 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3957399 00:23:29.548 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3957399 ']' 00:23:29.548 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3957399 00:23:29.548 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:29.548 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:29.548 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3957399 00:23:29.548 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:29.548 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:29.548 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3957399' 00:23:29.548 killing process with pid 3957399 00:23:29.548 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3957399 00:23:29.548 Received shutdown signal, test time was about 10.000000 seconds 00:23:29.548 00:23:29.548 Latency(us) 00:23:29.548 [2024-12-13T09:25:23.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.548 [2024-12-13T09:25:23.439Z] =================================================================================================================== 00:23:29.548 [2024-12-13T09:25:23.439Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:29.548 10:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3957399 00:23:30.484 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3956938 00:23:30.484 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3956938 ']' 00:23:30.484 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3956938 00:23:30.484 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:30.484 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.484 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3956938 00:23:30.484 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:30.484 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:30.484 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3956938' 00:23:30.484 killing process with pid 3956938 00:23:30.484 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3956938 00:23:30.484 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # 
wait 3956938 00:23:31.859 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:31.859 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:31.859 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:31.859 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:31.859 "subsystems": [ 00:23:31.859 { 00:23:31.859 "subsystem": "keyring", 00:23:31.859 "config": [ 00:23:31.859 { 00:23:31.859 "method": "keyring_file_add_key", 00:23:31.859 "params": { 00:23:31.859 "name": "key0", 00:23:31.859 "path": "/tmp/tmp.IyV0de1qDz" 00:23:31.859 } 00:23:31.859 } 00:23:31.859 ] 00:23:31.859 }, 00:23:31.859 { 00:23:31.859 "subsystem": "iobuf", 00:23:31.859 "config": [ 00:23:31.859 { 00:23:31.859 "method": "iobuf_set_options", 00:23:31.859 "params": { 00:23:31.859 "small_pool_count": 8192, 00:23:31.859 "large_pool_count": 1024, 00:23:31.859 "small_bufsize": 8192, 00:23:31.859 "large_bufsize": 135168, 00:23:31.859 "enable_numa": false 00:23:31.859 } 00:23:31.859 } 00:23:31.859 ] 00:23:31.859 }, 00:23:31.859 { 00:23:31.859 "subsystem": "sock", 00:23:31.859 "config": [ 00:23:31.859 { 00:23:31.860 "method": "sock_set_default_impl", 00:23:31.860 "params": { 00:23:31.860 "impl_name": "posix" 00:23:31.860 } 00:23:31.860 }, 00:23:31.860 { 00:23:31.860 "method": "sock_impl_set_options", 00:23:31.860 "params": { 00:23:31.860 "impl_name": "ssl", 00:23:31.860 "recv_buf_size": 4096, 00:23:31.860 "send_buf_size": 4096, 00:23:31.860 "enable_recv_pipe": true, 00:23:31.860 "enable_quickack": false, 00:23:31.860 "enable_placement_id": 0, 00:23:31.860 "enable_zerocopy_send_server": true, 00:23:31.860 "enable_zerocopy_send_client": false, 00:23:31.860 "zerocopy_threshold": 0, 00:23:31.860 "tls_version": 0, 00:23:31.860 "enable_ktls": false 00:23:31.860 } 00:23:31.860 }, 00:23:31.860 { 00:23:31.860 "method": "sock_impl_set_options", 00:23:31.860 "params": { 00:23:31.860 "impl_name": "posix", 00:23:31.860 "recv_buf_size": 2097152, 00:23:31.860 "send_buf_size": 2097152, 00:23:31.860 "enable_recv_pipe": true, 00:23:31.860 "enable_quickack": false, 00:23:31.860 "enable_placement_id": 0, 00:23:31.860 "enable_zerocopy_send_server": true, 00:23:31.860 "enable_zerocopy_send_client": false, 00:23:31.860 "zerocopy_threshold": 0, 00:23:31.860 "tls_version": 0, 00:23:31.860 "enable_ktls": false 00:23:31.860 } 00:23:31.860 } 00:23:31.860 ] 00:23:31.860 }, 00:23:31.860 { 00:23:31.860 "subsystem": "vmd", 00:23:31.860 "config": [] 00:23:31.860 }, 00:23:31.860 { 00:23:31.860 "subsystem": "accel", 00:23:31.860 "config": [ 00:23:31.860 { 00:23:31.860 "method": "accel_set_options", 00:23:31.860 "params": { 00:23:31.860 "small_cache_size": 128, 00:23:31.860 "large_cache_size": 16, 00:23:31.860 "task_count": 2048, 00:23:31.860 "sequence_count": 2048, 00:23:31.860 "buf_count": 2048 00:23:31.860 } 00:23:31.860 } 00:23:31.860 ] 00:23:31.860 }, 00:23:31.860 { 00:23:31.860 "subsystem": "bdev", 00:23:31.860 "config": [ 00:23:31.860 { 00:23:31.860 "method": "bdev_set_options", 00:23:31.860 "params": { 00:23:31.860 "bdev_io_pool_size": 65535, 00:23:31.860 "bdev_io_cache_size": 256, 00:23:31.860 "bdev_auto_examine": true, 00:23:31.860 "iobuf_small_cache_size": 128, 00:23:31.860 "iobuf_large_cache_size": 16 00:23:31.860 } 00:23:31.860 }, 00:23:31.860 { 00:23:31.860 "method": "bdev_raid_set_options", 00:23:31.860 "params": { 00:23:31.860 "process_window_size_kb": 1024, 
00:23:31.860 "process_max_bandwidth_mb_sec": 0 00:23:31.860 } 00:23:31.860 }, 00:23:31.860 { 00:23:31.860 "method": "bdev_iscsi_set_options", 00:23:31.860 "params": { 00:23:31.860 "timeout_sec": 30 00:23:31.860 } 00:23:31.860 }, 00:23:31.860 { 00:23:31.860 "method": "bdev_nvme_set_options", 00:23:31.860 "params": { 00:23:31.860 "action_on_timeout": "none", 00:23:31.860 "timeout_us": 0, 00:23:31.860 "timeout_admin_us": 0, 00:23:31.860 "keep_alive_timeout_ms": 10000, 00:23:31.860 "arbitration_burst": 0, 00:23:31.860 "low_priority_weight": 0, 00:23:31.860 "medium_priority_weight": 0, 00:23:31.860 "high_priority_weight": 0, 00:23:31.860 "nvme_adminq_poll_period_us": 10000, 00:23:31.860 "nvme_ioq_poll_period_us": 0, 00:23:31.860 "io_queue_requests": 0, 00:23:31.860 "delay_cmd_submit": true, 00:23:31.860 "transport_retry_count": 4, 00:23:31.860 "bdev_retry_count": 3, 00:23:31.860 "transport_ack_timeout": 0, 00:23:31.860 "ctrlr_loss_timeout_sec": 0, 00:23:31.860 "reconnect_delay_sec": 0, 00:23:31.860 "fast_io_fail_timeout_sec": 0, 00:23:31.860 "disable_auto_failback": false, 00:23:31.860 "generate_uuids": false, 00:23:31.860 "transport_tos": 0, 00:23:31.860 "nvme_error_stat": false, 00:23:31.860 "rdma_srq_size": 0, 00:23:31.860 "io_path_stat": false, 00:23:31.860 "allow_accel_sequence": false, 00:23:31.860 "rdma_max_cq_size": 0, 00:23:31.860 "rdma_cm_event_timeout_ms": 0, 00:23:31.860 "dhchap_digests": [ 00:23:31.860 "sha256", 00:23:31.860 "sha384", 00:23:31.860 "sha512" 00:23:31.860 ], 00:23:31.860 "dhchap_dhgroups": [ 00:23:31.860 "null", 00:23:31.860 "ffdhe2048", 00:23:31.860 "ffdhe3072", 00:23:31.860 "ffdhe4096", 00:23:31.860 "ffdhe6144", 00:23:31.860 "ffdhe8192" 00:23:31.860 ], 00:23:31.860 "rdma_umr_per_io": false 00:23:31.860 } 00:23:31.860 }, 00:23:31.860 { 00:23:31.860 "method": "bdev_nvme_set_hotplug", 00:23:31.860 "params": { 00:23:31.860 "period_us": 100000, 00:23:31.860 "enable": false 00:23:31.860 } 00:23:31.860 }, 00:23:31.860 { 00:23:31.860 "method": "bdev_malloc_create", 00:23:31.860 "params": { 00:23:31.860 "name": "malloc0", 00:23:31.860 "num_blocks": 8192, 00:23:31.860 "block_size": 4096, 00:23:31.860 "physical_block_size": 4096, 00:23:31.860 "uuid": "1b51c49d-6e89-4c19-83a6-ccde0c5a6ebf", 00:23:31.860 "optimal_io_boundary": 0, 00:23:31.860 "md_size": 0, 00:23:31.860 "dif_type": 0, 00:23:31.860 "dif_is_head_of_md": false, 00:23:31.860 "dif_pi_format": 0 00:23:31.860 } 00:23:31.860 }, 00:23:31.860 { 00:23:31.860 "method": "bdev_wait_for_examine" 00:23:31.860 } 00:23:31.860 ] 00:23:31.860 }, 00:23:31.860 { 00:23:31.860 "subsystem": "nbd", 00:23:31.860 "config": [] 00:23:31.860 }, 00:23:31.860 { 00:23:31.860 "subsystem": "scheduler", 00:23:31.860 "config": [ 00:23:31.860 { 00:23:31.860 "method": "framework_set_scheduler", 00:23:31.860 "params": { 00:23:31.860 "name": "static" 00:23:31.860 } 00:23:31.860 } 00:23:31.860 ] 00:23:31.860 }, 00:23:31.860 { 00:23:31.860 "subsystem": "nvmf", 00:23:31.860 "config": [ 00:23:31.860 { 00:23:31.860 "method": "nvmf_set_config", 00:23:31.860 "params": { 00:23:31.860 "discovery_filter": "match_any", 00:23:31.860 "admin_cmd_passthru": { 00:23:31.860 "identify_ctrlr": false 00:23:31.860 }, 00:23:31.860 "dhchap_digests": [ 00:23:31.860 "sha256", 00:23:31.860 "sha384", 00:23:31.860 "sha512" 00:23:31.860 ], 00:23:31.860 "dhchap_dhgroups": [ 00:23:31.860 "null", 00:23:31.860 "ffdhe2048", 00:23:31.860 "ffdhe3072", 00:23:31.860 "ffdhe4096", 00:23:31.860 "ffdhe6144", 00:23:31.860 "ffdhe8192" 00:23:31.860 ] 00:23:31.860 } 00:23:31.860 }, 00:23:31.860 { 
00:23:31.860 "method": "nvmf_set_max_subsystems", 00:23:31.860 "params": { 00:23:31.860 "max_subsystems": 1024 00:23:31.860 } 00:23:31.860 }, 00:23:31.860 { 00:23:31.860 "method": "nvmf_set_crdt", 00:23:31.860 "params": { 00:23:31.860 "crdt1": 0, 00:23:31.860 "crdt2": 0, 00:23:31.860 "crdt3": 0 00:23:31.860 } 00:23:31.860 }, 00:23:31.860 { 00:23:31.860 "method": "nvmf_create_transport", 00:23:31.860 "params": { 00:23:31.860 "trtype": "TCP", 00:23:31.860 "max_queue_depth": 128, 00:23:31.860 "max_io_qpairs_per_ctrlr": 127, 00:23:31.860 "in_capsule_data_size": 4096, 00:23:31.860 "max_io_size": 131072, 00:23:31.860 "io_unit_size": 131072, 00:23:31.860 "max_aq_depth": 128, 00:23:31.860 "num_shared_buffers": 511, 00:23:31.860 "buf_cache_size": 4294967295, 00:23:31.860 "dif_insert_or_strip": false, 00:23:31.860 "zcopy": false, 00:23:31.860 "c2h_success": false, 00:23:31.860 "sock_priority": 0, 00:23:31.860 "abort_timeout_sec": 1, 00:23:31.860 "ack_timeout": 0, 00:23:31.860 "data_wr_pool_size": 0 00:23:31.860 } 00:23:31.860 }, 00:23:31.860 { 00:23:31.860 "method": "nvmf_create_subsystem", 00:23:31.860 "params": { 00:23:31.860 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.860 "allow_any_host": false, 00:23:31.860 "serial_number": "SPDK00000000000001", 00:23:31.860 "model_number": "SPDK bdev Controller", 00:23:31.860 "max_namespaces": 10, 00:23:31.860 "min_cntlid": 1, 00:23:31.860 "max_cntlid": 65519, 00:23:31.860 "ana_reporting": false 00:23:31.860 } 00:23:31.860 }, 00:23:31.860 { 00:23:31.861 "method": "nvmf_subsystem_add_host", 00:23:31.861 "params": { 00:23:31.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.861 "host": "nqn.2016-06.io.spdk:host1", 00:23:31.861 "psk": "key0" 00:23:31.861 } 00:23:31.861 }, 00:23:31.861 { 00:23:31.861 "method": "nvmf_subsystem_add_ns", 00:23:31.861 "params": { 00:23:31.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.861 "namespace": { 00:23:31.861 "nsid": 1, 00:23:31.861 "bdev_name": "malloc0", 00:23:31.861 "nguid": "1B51C49D6E894C1983A6CCDE0C5A6EBF", 00:23:31.861 "uuid": "1b51c49d-6e89-4c19-83a6-ccde0c5a6ebf", 00:23:31.861 "no_auto_visible": false 00:23:31.861 } 00:23:31.861 } 00:23:31.861 }, 00:23:31.861 { 00:23:31.861 "method": "nvmf_subsystem_add_listener", 00:23:31.861 "params": { 00:23:31.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.861 "listen_address": { 00:23:31.861 "trtype": "TCP", 00:23:31.861 "adrfam": "IPv4", 00:23:31.861 "traddr": "10.0.0.2", 00:23:31.861 "trsvcid": "4420" 00:23:31.861 }, 00:23:31.861 "secure_channel": true 00:23:31.861 } 00:23:31.861 } 00:23:31.861 ] 00:23:31.861 } 00:23:31.861 ] 00:23:31.861 }' 00:23:31.861 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.861 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3958089 00:23:31.861 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:31.861 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3958089 00:23:31.861 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3958089 ']' 00:23:31.861 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.861 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:31.861 10:25:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.861 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:31.861 10:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.861 [2024-12-13 10:25:25.541763] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:31.861 [2024-12-13 10:25:25.541854] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.861 [2024-12-13 10:25:25.658381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.120 [2024-12-13 10:25:25.764396] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.120 [2024-12-13 10:25:25.764437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.120 [2024-12-13 10:25:25.764451] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.120 [2024-12-13 10:25:25.764461] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.120 [2024-12-13 10:25:25.764469] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:32.120 [2024-12-13 10:25:25.765716] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.378 [2024-12-13 10:25:26.255115] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.637 [2024-12-13 10:25:26.287158] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:32.637 [2024-12-13 10:25:26.287404] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.637 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.637 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:32.637 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:32.637 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:32.637 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.637 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.637 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3958120 00:23:32.637 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3958120 /var/tmp/bdevperf.sock 00:23:32.637 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3958120 ']' 00:23:32.637 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.637 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 
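For reference, the target-side configuration streamed into nvmf_tgt through /dev/fd/62 above boils down to three TLS-relevant pieces: a keyring entry pointing at the PSK file, a host entry on the subsystem that references that key, and a listener created with secure_channel enabled. A condensed sketch of the same pattern follows (NQNs, addresses and the key path are copied from the dump above; the function name is illustrative, and the bdev/namespace entries present in the full dump are omitted):

build_tls_tgt_config() {
  # Condensed from the config dump above; the full dump also creates malloc0
  # and adds it as a namespace before the listener entry.
  cat <<'EOF'
{
  "subsystems": [
    { "subsystem": "keyring",
      "config": [ { "method": "keyring_file_add_key",
                    "params": { "name": "key0", "path": "/tmp/tmp.IyV0de1qDz" } } ] },
    { "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_create_transport",
          "params": { "trtype": "TCP" } },
        { "method": "nvmf_create_subsystem",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false,
                      "serial_number": "SPDK00000000000001" } },
        { "method": "nvmf_subsystem_add_host",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
        { "method": "nvmf_subsystem_add_listener",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                          "traddr": "10.0.0.2", "trsvcid": "4420" },
                      "secure_channel": true } } ] } ] }
EOF
}
# Handing it over via process substitution is what produces the -c /dev/fd/62 seen above:
#   nvmf_tgt -m 0x2 -c <(build_tls_tgt_config)

The bdevperf config echoed next does the mirror image on the initiator side: it loads the same key into the keyring and attaches the controller with "psk": "key0", so the TLS handshake happens as soon as bdevperf processes its /dev/fd/63 config.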
00:23:32.637 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.637 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:32.637 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:32.637 "subsystems": [ 00:23:32.637 { 00:23:32.637 "subsystem": "keyring", 00:23:32.637 "config": [ 00:23:32.637 { 00:23:32.637 "method": "keyring_file_add_key", 00:23:32.637 "params": { 00:23:32.637 "name": "key0", 00:23:32.637 "path": "/tmp/tmp.IyV0de1qDz" 00:23:32.637 } 00:23:32.637 } 00:23:32.637 ] 00:23:32.637 }, 00:23:32.637 { 00:23:32.637 "subsystem": "iobuf", 00:23:32.637 "config": [ 00:23:32.637 { 00:23:32.637 "method": "iobuf_set_options", 00:23:32.637 "params": { 00:23:32.637 "small_pool_count": 8192, 00:23:32.637 "large_pool_count": 1024, 00:23:32.637 "small_bufsize": 8192, 00:23:32.637 "large_bufsize": 135168, 00:23:32.637 "enable_numa": false 00:23:32.637 } 00:23:32.637 } 00:23:32.637 ] 00:23:32.637 }, 00:23:32.637 { 00:23:32.637 "subsystem": "sock", 00:23:32.637 "config": [ 00:23:32.637 { 00:23:32.637 "method": "sock_set_default_impl", 00:23:32.637 "params": { 00:23:32.637 "impl_name": "posix" 00:23:32.637 } 00:23:32.637 }, 00:23:32.637 { 00:23:32.637 "method": "sock_impl_set_options", 00:23:32.637 "params": { 00:23:32.637 "impl_name": "ssl", 00:23:32.637 "recv_buf_size": 4096, 00:23:32.637 "send_buf_size": 4096, 00:23:32.637 "enable_recv_pipe": true, 00:23:32.637 "enable_quickack": false, 00:23:32.637 "enable_placement_id": 0, 00:23:32.637 "enable_zerocopy_send_server": true, 00:23:32.637 "enable_zerocopy_send_client": false, 00:23:32.637 "zerocopy_threshold": 0, 00:23:32.637 "tls_version": 0, 00:23:32.637 "enable_ktls": false 00:23:32.637 } 00:23:32.637 }, 00:23:32.637 { 00:23:32.637 "method": "sock_impl_set_options", 00:23:32.637 "params": { 00:23:32.637 "impl_name": "posix", 00:23:32.637 "recv_buf_size": 2097152, 00:23:32.637 "send_buf_size": 2097152, 00:23:32.637 "enable_recv_pipe": true, 00:23:32.637 "enable_quickack": false, 00:23:32.637 "enable_placement_id": 0, 00:23:32.637 "enable_zerocopy_send_server": true, 00:23:32.637 "enable_zerocopy_send_client": false, 00:23:32.637 "zerocopy_threshold": 0, 00:23:32.637 "tls_version": 0, 00:23:32.637 "enable_ktls": false 00:23:32.637 } 00:23:32.637 } 00:23:32.637 ] 00:23:32.637 }, 00:23:32.637 { 00:23:32.637 "subsystem": "vmd", 00:23:32.637 "config": [] 00:23:32.637 }, 00:23:32.637 { 00:23:32.637 "subsystem": "accel", 00:23:32.637 "config": [ 00:23:32.637 { 00:23:32.637 "method": "accel_set_options", 00:23:32.637 "params": { 00:23:32.637 "small_cache_size": 128, 00:23:32.637 "large_cache_size": 16, 00:23:32.637 "task_count": 2048, 00:23:32.637 "sequence_count": 2048, 00:23:32.637 "buf_count": 2048 00:23:32.637 } 00:23:32.637 } 00:23:32.637 ] 00:23:32.637 }, 00:23:32.637 { 00:23:32.637 "subsystem": "bdev", 00:23:32.637 "config": [ 00:23:32.637 { 00:23:32.637 "method": "bdev_set_options", 00:23:32.637 "params": { 00:23:32.637 "bdev_io_pool_size": 65535, 00:23:32.638 "bdev_io_cache_size": 256, 00:23:32.638 "bdev_auto_examine": true, 00:23:32.638 "iobuf_small_cache_size": 128, 00:23:32.638 "iobuf_large_cache_size": 16 00:23:32.638 } 00:23:32.638 }, 00:23:32.638 { 00:23:32.638 "method": "bdev_raid_set_options", 00:23:32.638 
"params": { 00:23:32.638 "process_window_size_kb": 1024, 00:23:32.638 "process_max_bandwidth_mb_sec": 0 00:23:32.638 } 00:23:32.638 }, 00:23:32.638 { 00:23:32.638 "method": "bdev_iscsi_set_options", 00:23:32.638 "params": { 00:23:32.638 "timeout_sec": 30 00:23:32.638 } 00:23:32.638 }, 00:23:32.638 { 00:23:32.638 "method": "bdev_nvme_set_options", 00:23:32.638 "params": { 00:23:32.638 "action_on_timeout": "none", 00:23:32.638 "timeout_us": 0, 00:23:32.638 "timeout_admin_us": 0, 00:23:32.638 "keep_alive_timeout_ms": 10000, 00:23:32.638 "arbitration_burst": 0, 00:23:32.638 "low_priority_weight": 0, 00:23:32.638 "medium_priority_weight": 0, 00:23:32.638 "high_priority_weight": 0, 00:23:32.638 "nvme_adminq_poll_period_us": 10000, 00:23:32.638 "nvme_ioq_poll_period_us": 0, 00:23:32.638 "io_queue_requests": 512, 00:23:32.638 "delay_cmd_submit": true, 00:23:32.638 "transport_retry_count": 4, 00:23:32.638 "bdev_retry_count": 3, 00:23:32.638 "transport_ack_timeout": 0, 00:23:32.638 "ctrlr_loss_timeout_sec": 0, 00:23:32.638 "reconnect_delay_sec": 0, 00:23:32.638 "fast_io_fail_timeout_sec": 0, 00:23:32.638 "disable_auto_failback": false, 00:23:32.638 "generate_uuids": false, 00:23:32.638 "transport_tos": 0, 00:23:32.638 "nvme_error_stat": false, 00:23:32.638 "rdma_srq_size": 0, 00:23:32.638 "io_path_stat": false, 00:23:32.638 "allow_accel_sequence": false, 00:23:32.638 "rdma_max_cq_size": 0, 00:23:32.638 "rdma_cm_event_timeout_ms": 0, 00:23:32.638 "dhchap_digests": [ 00:23:32.638 "sha256", 00:23:32.638 "sha384", 00:23:32.638 "sha512" 00:23:32.638 ], 00:23:32.638 "dhchap_dhgroups": [ 00:23:32.638 "null", 00:23:32.638 "ffdhe2048", 00:23:32.638 "ffdhe3072", 00:23:32.638 "ffdhe4096", 00:23:32.638 "ffdhe6144", 00:23:32.638 "ffdhe8192" 00:23:32.638 ], 00:23:32.638 "rdma_umr_per_io": false 00:23:32.638 } 00:23:32.638 }, 00:23:32.638 { 00:23:32.638 "method": "bdev_nvme_attach_controller", 00:23:32.638 "params": { 00:23:32.638 "name": "TLSTEST", 00:23:32.638 "trtype": "TCP", 00:23:32.638 "adrfam": "IPv4", 00:23:32.638 "traddr": "10.0.0.2", 00:23:32.638 "trsvcid": "4420", 00:23:32.638 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.638 "prchk_reftag": false, 00:23:32.638 "prchk_guard": false, 00:23:32.638 "ctrlr_loss_timeout_sec": 0, 00:23:32.638 "reconnect_delay_sec": 0, 00:23:32.638 "fast_io_fail_timeout_sec": 0, 00:23:32.638 "psk": "key0", 00:23:32.638 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:32.638 "hdgst": false, 00:23:32.638 "ddgst": false, 00:23:32.638 "multipath": "multipath" 00:23:32.638 } 00:23:32.638 }, 00:23:32.638 { 00:23:32.638 "method": "bdev_nvme_set_hotplug", 00:23:32.638 "params": { 00:23:32.638 "period_us": 100000, 00:23:32.638 "enable": false 00:23:32.638 } 00:23:32.638 }, 00:23:32.638 { 00:23:32.638 "method": "bdev_wait_for_examine" 00:23:32.638 } 00:23:32.638 ] 00:23:32.638 }, 00:23:32.638 { 00:23:32.638 "subsystem": "nbd", 00:23:32.638 "config": [] 00:23:32.638 } 00:23:32.638 ] 00:23:32.638 }' 00:23:32.638 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.638 10:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.638 [2024-12-13 10:25:26.458068] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:23:32.638 [2024-12-13 10:25:26.458156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3958120 ] 00:23:32.897 [2024-12-13 10:25:26.570028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.897 [2024-12-13 10:25:26.681136] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.465 [2024-12-13 10:25:27.099284] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:33.465 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.465 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:33.465 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:33.465 Running I/O for 10 seconds... 00:23:35.777 4400.00 IOPS, 17.19 MiB/s [2024-12-13T09:25:30.604Z] 4555.00 IOPS, 17.79 MiB/s [2024-12-13T09:25:31.540Z] 4488.33 IOPS, 17.53 MiB/s [2024-12-13T09:25:32.475Z] 4515.50 IOPS, 17.64 MiB/s [2024-12-13T09:25:33.411Z] 4542.20 IOPS, 17.74 MiB/s [2024-12-13T09:25:34.787Z] 4575.50 IOPS, 17.87 MiB/s [2024-12-13T09:25:35.721Z] 4541.86 IOPS, 17.74 MiB/s [2024-12-13T09:25:36.656Z] 4495.88 IOPS, 17.56 MiB/s [2024-12-13T09:25:37.592Z] 4464.56 IOPS, 17.44 MiB/s [2024-12-13T09:25:37.592Z] 4451.10 IOPS, 17.39 MiB/s 00:23:43.701 Latency(us) 00:23:43.701 [2024-12-13T09:25:37.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.702 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:43.702 Verification LBA range: start 0x0 length 0x2000 00:23:43.702 TLSTESTn1 : 10.02 4455.04 17.40 0.00 0.00 28686.85 6085.49 32705.58 00:23:43.702 [2024-12-13T09:25:37.593Z] =================================================================================================================== 00:23:43.702 [2024-12-13T09:25:37.593Z] Total : 4455.04 17.40 0.00 0.00 28686.85 6085.49 32705.58 00:23:43.702 { 00:23:43.702 "results": [ 00:23:43.702 { 00:23:43.702 "job": "TLSTESTn1", 00:23:43.702 "core_mask": "0x4", 00:23:43.702 "workload": "verify", 00:23:43.702 "status": "finished", 00:23:43.702 "verify_range": { 00:23:43.702 "start": 0, 00:23:43.702 "length": 8192 00:23:43.702 }, 00:23:43.702 "queue_depth": 128, 00:23:43.702 "io_size": 4096, 00:23:43.702 "runtime": 10.019896, 00:23:43.702 "iops": 4455.036259857387, 00:23:43.702 "mibps": 17.40248539006792, 00:23:43.702 "io_failed": 0, 00:23:43.702 "io_timeout": 0, 00:23:43.702 "avg_latency_us": 28686.851391234868, 00:23:43.702 "min_latency_us": 6085.4857142857145, 00:23:43.702 "max_latency_us": 32705.584761904764 00:23:43.702 } 00:23:43.702 ], 00:23:43.702 "core_count": 1 00:23:43.702 } 00:23:43.702 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:43.702 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3958120 00:23:43.702 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3958120 ']' 00:23:43.702 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3958120 00:23:43.702 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:23:43.702 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.702 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3958120 00:23:43.702 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:43.702 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:43.702 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3958120' 00:23:43.702 killing process with pid 3958120 00:23:43.702 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3958120 00:23:43.702 Received shutdown signal, test time was about 10.000000 seconds 00:23:43.702 00:23:43.702 Latency(us) 00:23:43.702 [2024-12-13T09:25:37.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.702 [2024-12-13T09:25:37.593Z] =================================================================================================================== 00:23:43.702 [2024-12-13T09:25:37.593Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:43.702 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3958120 00:23:44.638 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3958089 00:23:44.638 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3958089 ']' 00:23:44.638 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3958089 00:23:44.638 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:44.638 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.638 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3958089 00:23:44.638 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:44.638 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:44.638 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3958089' 00:23:44.638 killing process with pid 3958089 00:23:44.638 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3958089 00:23:44.638 10:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3958089 00:23:46.015 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:46.015 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:46.015 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:46.015 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.015 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3960348 00:23:46.015 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:46.015 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3960348 
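The teardown between phases keeps repeating the same killprocess idiom from autotest_common.sh: probe the pid with kill -0, resolve the process name with ps to confirm it is an SPDK reactor rather than a sudo wrapper, then kill it and wait for the background child to exit. Roughly (an approximation of the helper, not its exact implementation):

killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1
  kill -0 "$pid" || return 1                       # still alive?
  local name
  name=$(ps --no-headers -o comm= "$pid")          # reactor_0/1/2 for SPDK apps
  [ "$name" = sudo ] && return 1                   # never signal an elevated wrapper
  echo "killing process with pid $pid"
  kill "$pid"
}
# Callers follow up with: wait "$pid" -- which works because the app was started
# as a background child of the same test shell.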
00:23:46.015 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3960348 ']' 00:23:46.015 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.015 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.015 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.015 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.015 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.015 [2024-12-13 10:25:39.736406] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:46.015 [2024-12-13 10:25:39.736527] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.015 [2024-12-13 10:25:39.852607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.273 [2024-12-13 10:25:39.956280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.273 [2024-12-13 10:25:39.956325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.274 [2024-12-13 10:25:39.956335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.274 [2024-12-13 10:25:39.956345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.274 [2024-12-13 10:25:39.956352] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
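This phase configures the target the other way around: instead of a pre-built JSON config, the setup_nvmf_tgt helper traced below drives the already-running target through rpc.py. With the jenkins workspace prefix dropped, the sequence amounts to the following (arguments exactly as they appear in the trace; rpc.py is assumed to talk to the default /var/tmp/spdk.sock):

RPC=scripts/rpc.py       # stands in for the full workspace path used in the trace
KEY=/tmp/tmp.IyV0de1qDz  # PSK interchange file prepared earlier in the test

$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: listen with TLS (secure channel)
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 "$KEY"
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0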
00:23:46.274 [2024-12-13 10:25:39.957732] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.840 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.840 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:46.840 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:46.840 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:46.840 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.840 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.840 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.IyV0de1qDz 00:23:46.840 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.IyV0de1qDz 00:23:46.840 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:47.099 [2024-12-13 10:25:40.737869] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.099 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:47.099 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:47.357 [2024-12-13 10:25:41.110859] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:47.357 [2024-12-13 10:25:41.111130] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.357 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:47.615 malloc0 00:23:47.615 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:47.874 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.IyV0de1qDz 00:23:47.874 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:48.132 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3960609 00:23:48.132 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:48.132 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3960609 /var/tmp/bdevperf.sock 00:23:48.132 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3960609 ']' 00:23:48.132 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.132 10:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:48.132 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:48.132 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:48.132 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.132 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.132 [2024-12-13 10:25:41.969789] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:48.132 [2024-12-13 10:25:41.969878] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3960609 ] 00:23:48.391 [2024-12-13 10:25:42.083697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.391 [2024-12-13 10:25:42.189111] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.958 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:48.958 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:48.958 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IyV0de1qDz 00:23:49.217 10:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:49.475 [2024-12-13 10:25:43.137236] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:49.475 nvme0n1 00:23:49.475 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:49.475 Running I/O for 1 seconds... 
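The initiator side of this phase, traced just above, follows the same key-first pattern against bdevperf's own RPC socket: start bdevperf idle with -z, load the PSK into its keyring, attach the controller with --psk so the connect performs a TLS handshake, then drive the verify workload with bdevperf.py. Condensed (workspace prefix dropped, values as in the trace, the socket-wait loop approximates waitforlisten):

SOCK=/var/tmp/bdevperf.sock

build/examples/bdevperf -m 2 -z -r "$SOCK" -q 128 -o 4k -w verify -t 1 &
BDEVPERF_PID=$!
while [ ! -S "$SOCK" ]; do sleep 0.1; done          # wait for the RPC socket to appear

scripts/rpc.py -s "$SOCK" keyring_file_add_key key0 /tmp/tmp.IyV0de1qDz
scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests
# afterwards the test tears bdevperf down with killprocess "$BDEVPERF_PID", as traced below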
00:23:50.849 4560.00 IOPS, 17.81 MiB/s 00:23:50.849 Latency(us) 00:23:50.849 [2024-12-13T09:25:44.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.849 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:50.849 Verification LBA range: start 0x0 length 0x2000 00:23:50.849 nvme0n1 : 1.02 4599.29 17.97 0.00 0.00 27580.29 5991.86 24841.26 00:23:50.849 [2024-12-13T09:25:44.740Z] =================================================================================================================== 00:23:50.849 [2024-12-13T09:25:44.740Z] Total : 4599.29 17.97 0.00 0.00 27580.29 5991.86 24841.26 00:23:50.849 { 00:23:50.849 "results": [ 00:23:50.849 { 00:23:50.849 "job": "nvme0n1", 00:23:50.849 "core_mask": "0x2", 00:23:50.849 "workload": "verify", 00:23:50.849 "status": "finished", 00:23:50.849 "verify_range": { 00:23:50.849 "start": 0, 00:23:50.849 "length": 8192 00:23:50.849 }, 00:23:50.849 "queue_depth": 128, 00:23:50.849 "io_size": 4096, 00:23:50.849 "runtime": 1.019288, 00:23:50.849 "iops": 4599.288915399769, 00:23:50.849 "mibps": 17.96597232578035, 00:23:50.849 "io_failed": 0, 00:23:50.849 "io_timeout": 0, 00:23:50.849 "avg_latency_us": 27580.289185763042, 00:23:50.849 "min_latency_us": 5991.862857142857, 00:23:50.849 "max_latency_us": 24841.26476190476 00:23:50.849 } 00:23:50.849 ], 00:23:50.849 "core_count": 1 00:23:50.849 } 00:23:50.849 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3960609 00:23:50.849 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3960609 ']' 00:23:50.849 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3960609 00:23:50.849 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:50.849 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.850 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3960609 00:23:50.850 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:50.850 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:50.850 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3960609' 00:23:50.850 killing process with pid 3960609 00:23:50.850 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3960609 00:23:50.850 Received shutdown signal, test time was about 1.000000 seconds 00:23:50.850 00:23:50.850 Latency(us) 00:23:50.850 [2024-12-13T09:25:44.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.850 [2024-12-13T09:25:44.741Z] =================================================================================================================== 00:23:50.850 [2024-12-13T09:25:44.741Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:50.850 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3960609 00:23:51.477 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3960348 00:23:51.477 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3960348 ']' 00:23:51.477 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3960348 00:23:51.477 10:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:51.477 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:51.477 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3960348 00:23:51.477 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:51.477 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:51.477 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3960348' 00:23:51.477 killing process with pid 3960348 00:23:51.477 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3960348 00:23:51.477 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3960348 00:23:52.854 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:52.854 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:52.854 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.854 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.854 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3961509 00:23:52.854 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3961509 00:23:52.854 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:52.854 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3961509 ']' 00:23:52.854 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.854 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.854 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.854 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.854 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.854 [2024-12-13 10:25:46.616818] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:52.854 [2024-12-13 10:25:46.616920] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.854 [2024-12-13 10:25:46.731642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.113 [2024-12-13 10:25:46.835258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.113 [2024-12-13 10:25:46.835305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:53.113 [2024-12-13 10:25:46.835316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.113 [2024-12-13 10:25:46.835326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.113 [2024-12-13 10:25:46.835334] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.113 [2024-12-13 10:25:46.836579] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.681 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.681 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:53.681 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:53.681 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:53.681 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.681 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.681 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:53.681 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.681 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.681 [2024-12-13 10:25:47.454187] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.681 malloc0 00:23:53.681 [2024-12-13 10:25:47.510838] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:53.681 [2024-12-13 10:25:47.511096] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.681 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.681 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3961545 00:23:53.681 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3961545 /var/tmp/bdevperf.sock 00:23:53.681 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:53.681 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3961545 ']' 00:23:53.681 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:53.681 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.681 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:53.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:53.681 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.681 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.940 [2024-12-13 10:25:47.614655] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:23:53.940 [2024-12-13 10:25:47.614741] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3961545 ] 00:23:53.940 [2024-12-13 10:25:47.727837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.198 [2024-12-13 10:25:47.835959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.764 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.764 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:54.764 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IyV0de1qDz 00:23:54.764 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:55.022 [2024-12-13 10:25:48.798550] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:55.022 nvme0n1 00:23:55.022 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:55.281 Running I/O for 1 seconds... 00:23:56.217 4400.00 IOPS, 17.19 MiB/s 00:23:56.217 Latency(us) 00:23:56.217 [2024-12-13T09:25:50.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.217 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:56.217 Verification LBA range: start 0x0 length 0x2000 00:23:56.217 nvme0n1 : 1.02 4456.85 17.41 0.00 0.00 28487.59 6397.56 30708.30 00:23:56.217 [2024-12-13T09:25:50.108Z] =================================================================================================================== 00:23:56.217 [2024-12-13T09:25:50.108Z] Total : 4456.85 17.41 0.00 0.00 28487.59 6397.56 30708.30 00:23:56.217 { 00:23:56.217 "results": [ 00:23:56.217 { 00:23:56.217 "job": "nvme0n1", 00:23:56.217 "core_mask": "0x2", 00:23:56.217 "workload": "verify", 00:23:56.217 "status": "finished", 00:23:56.217 "verify_range": { 00:23:56.217 "start": 0, 00:23:56.217 "length": 8192 00:23:56.217 }, 00:23:56.217 "queue_depth": 128, 00:23:56.217 "io_size": 4096, 00:23:56.217 "runtime": 1.015964, 00:23:56.217 "iops": 4456.850833297242, 00:23:56.217 "mibps": 17.409573567567353, 00:23:56.217 "io_failed": 0, 00:23:56.217 "io_timeout": 0, 00:23:56.217 "avg_latency_us": 28487.586956082785, 00:23:56.217 "min_latency_us": 6397.561904761905, 00:23:56.217 "max_latency_us": 30708.297142857144 00:23:56.217 } 00:23:56.217 ], 00:23:56.217 "core_count": 1 00:23:56.217 } 00:23:56.217 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:56.217 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.217 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.476 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.476 10:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:56.476 "subsystems": [ 00:23:56.476 { 00:23:56.476 "subsystem": "keyring", 00:23:56.476 "config": [ 00:23:56.476 { 00:23:56.476 "method": "keyring_file_add_key", 00:23:56.476 "params": { 00:23:56.476 "name": "key0", 00:23:56.476 "path": "/tmp/tmp.IyV0de1qDz" 00:23:56.476 } 00:23:56.476 } 00:23:56.476 ] 00:23:56.476 }, 00:23:56.476 { 00:23:56.476 "subsystem": "iobuf", 00:23:56.476 "config": [ 00:23:56.476 { 00:23:56.476 "method": "iobuf_set_options", 00:23:56.476 "params": { 00:23:56.476 "small_pool_count": 8192, 00:23:56.476 "large_pool_count": 1024, 00:23:56.476 "small_bufsize": 8192, 00:23:56.476 "large_bufsize": 135168, 00:23:56.476 "enable_numa": false 00:23:56.476 } 00:23:56.476 } 00:23:56.476 ] 00:23:56.476 }, 00:23:56.476 { 00:23:56.476 "subsystem": "sock", 00:23:56.476 "config": [ 00:23:56.476 { 00:23:56.476 "method": "sock_set_default_impl", 00:23:56.476 "params": { 00:23:56.476 "impl_name": "posix" 00:23:56.476 } 00:23:56.476 }, 00:23:56.476 { 00:23:56.476 "method": "sock_impl_set_options", 00:23:56.476 "params": { 00:23:56.476 "impl_name": "ssl", 00:23:56.476 "recv_buf_size": 4096, 00:23:56.476 "send_buf_size": 4096, 00:23:56.476 "enable_recv_pipe": true, 00:23:56.476 "enable_quickack": false, 00:23:56.476 "enable_placement_id": 0, 00:23:56.476 "enable_zerocopy_send_server": true, 00:23:56.476 "enable_zerocopy_send_client": false, 00:23:56.476 "zerocopy_threshold": 0, 00:23:56.476 "tls_version": 0, 00:23:56.476 "enable_ktls": false 00:23:56.476 } 00:23:56.476 }, 00:23:56.476 { 00:23:56.476 "method": "sock_impl_set_options", 00:23:56.476 "params": { 00:23:56.476 "impl_name": "posix", 00:23:56.476 "recv_buf_size": 2097152, 00:23:56.476 "send_buf_size": 2097152, 00:23:56.476 "enable_recv_pipe": true, 00:23:56.476 "enable_quickack": false, 00:23:56.476 "enable_placement_id": 0, 00:23:56.476 "enable_zerocopy_send_server": true, 00:23:56.476 "enable_zerocopy_send_client": false, 00:23:56.476 "zerocopy_threshold": 0, 00:23:56.476 "tls_version": 0, 00:23:56.476 "enable_ktls": false 00:23:56.476 } 00:23:56.476 } 00:23:56.476 ] 00:23:56.476 }, 00:23:56.476 { 00:23:56.476 "subsystem": "vmd", 00:23:56.476 "config": [] 00:23:56.476 }, 00:23:56.476 { 00:23:56.476 "subsystem": "accel", 00:23:56.476 "config": [ 00:23:56.476 { 00:23:56.476 "method": "accel_set_options", 00:23:56.476 "params": { 00:23:56.476 "small_cache_size": 128, 00:23:56.476 "large_cache_size": 16, 00:23:56.476 "task_count": 2048, 00:23:56.476 "sequence_count": 2048, 00:23:56.476 "buf_count": 2048 00:23:56.476 } 00:23:56.476 } 00:23:56.476 ] 00:23:56.476 }, 00:23:56.476 { 00:23:56.476 "subsystem": "bdev", 00:23:56.476 "config": [ 00:23:56.476 { 00:23:56.476 "method": "bdev_set_options", 00:23:56.476 "params": { 00:23:56.476 "bdev_io_pool_size": 65535, 00:23:56.476 "bdev_io_cache_size": 256, 00:23:56.476 "bdev_auto_examine": true, 00:23:56.476 "iobuf_small_cache_size": 128, 00:23:56.476 "iobuf_large_cache_size": 16 00:23:56.476 } 00:23:56.476 }, 00:23:56.476 { 00:23:56.476 "method": "bdev_raid_set_options", 00:23:56.476 "params": { 00:23:56.476 "process_window_size_kb": 1024, 00:23:56.476 "process_max_bandwidth_mb_sec": 0 00:23:56.476 } 00:23:56.476 }, 00:23:56.476 { 00:23:56.477 "method": "bdev_iscsi_set_options", 00:23:56.477 "params": { 00:23:56.477 "timeout_sec": 30 00:23:56.477 } 00:23:56.477 }, 00:23:56.477 { 00:23:56.477 "method": "bdev_nvme_set_options", 00:23:56.477 "params": { 00:23:56.477 "action_on_timeout": "none", 00:23:56.477 
"timeout_us": 0, 00:23:56.477 "timeout_admin_us": 0, 00:23:56.477 "keep_alive_timeout_ms": 10000, 00:23:56.477 "arbitration_burst": 0, 00:23:56.477 "low_priority_weight": 0, 00:23:56.477 "medium_priority_weight": 0, 00:23:56.477 "high_priority_weight": 0, 00:23:56.477 "nvme_adminq_poll_period_us": 10000, 00:23:56.477 "nvme_ioq_poll_period_us": 0, 00:23:56.477 "io_queue_requests": 0, 00:23:56.477 "delay_cmd_submit": true, 00:23:56.477 "transport_retry_count": 4, 00:23:56.477 "bdev_retry_count": 3, 00:23:56.477 "transport_ack_timeout": 0, 00:23:56.477 "ctrlr_loss_timeout_sec": 0, 00:23:56.477 "reconnect_delay_sec": 0, 00:23:56.477 "fast_io_fail_timeout_sec": 0, 00:23:56.477 "disable_auto_failback": false, 00:23:56.477 "generate_uuids": false, 00:23:56.477 "transport_tos": 0, 00:23:56.477 "nvme_error_stat": false, 00:23:56.477 "rdma_srq_size": 0, 00:23:56.477 "io_path_stat": false, 00:23:56.477 "allow_accel_sequence": false, 00:23:56.477 "rdma_max_cq_size": 0, 00:23:56.477 "rdma_cm_event_timeout_ms": 0, 00:23:56.477 "dhchap_digests": [ 00:23:56.477 "sha256", 00:23:56.477 "sha384", 00:23:56.477 "sha512" 00:23:56.477 ], 00:23:56.477 "dhchap_dhgroups": [ 00:23:56.477 "null", 00:23:56.477 "ffdhe2048", 00:23:56.477 "ffdhe3072", 00:23:56.477 "ffdhe4096", 00:23:56.477 "ffdhe6144", 00:23:56.477 "ffdhe8192" 00:23:56.477 ], 00:23:56.477 "rdma_umr_per_io": false 00:23:56.477 } 00:23:56.477 }, 00:23:56.477 { 00:23:56.477 "method": "bdev_nvme_set_hotplug", 00:23:56.477 "params": { 00:23:56.477 "period_us": 100000, 00:23:56.477 "enable": false 00:23:56.477 } 00:23:56.477 }, 00:23:56.477 { 00:23:56.477 "method": "bdev_malloc_create", 00:23:56.477 "params": { 00:23:56.477 "name": "malloc0", 00:23:56.477 "num_blocks": 8192, 00:23:56.477 "block_size": 4096, 00:23:56.477 "physical_block_size": 4096, 00:23:56.477 "uuid": "c2eae3ab-ca6b-43cd-b2b3-302b6699963a", 00:23:56.477 "optimal_io_boundary": 0, 00:23:56.477 "md_size": 0, 00:23:56.477 "dif_type": 0, 00:23:56.477 "dif_is_head_of_md": false, 00:23:56.477 "dif_pi_format": 0 00:23:56.477 } 00:23:56.477 }, 00:23:56.477 { 00:23:56.477 "method": "bdev_wait_for_examine" 00:23:56.477 } 00:23:56.477 ] 00:23:56.477 }, 00:23:56.477 { 00:23:56.477 "subsystem": "nbd", 00:23:56.477 "config": [] 00:23:56.477 }, 00:23:56.477 { 00:23:56.477 "subsystem": "scheduler", 00:23:56.477 "config": [ 00:23:56.477 { 00:23:56.477 "method": "framework_set_scheduler", 00:23:56.477 "params": { 00:23:56.477 "name": "static" 00:23:56.477 } 00:23:56.477 } 00:23:56.477 ] 00:23:56.477 }, 00:23:56.477 { 00:23:56.477 "subsystem": "nvmf", 00:23:56.477 "config": [ 00:23:56.477 { 00:23:56.477 "method": "nvmf_set_config", 00:23:56.477 "params": { 00:23:56.477 "discovery_filter": "match_any", 00:23:56.477 "admin_cmd_passthru": { 00:23:56.477 "identify_ctrlr": false 00:23:56.477 }, 00:23:56.477 "dhchap_digests": [ 00:23:56.477 "sha256", 00:23:56.477 "sha384", 00:23:56.477 "sha512" 00:23:56.477 ], 00:23:56.477 "dhchap_dhgroups": [ 00:23:56.477 "null", 00:23:56.477 "ffdhe2048", 00:23:56.477 "ffdhe3072", 00:23:56.477 "ffdhe4096", 00:23:56.477 "ffdhe6144", 00:23:56.477 "ffdhe8192" 00:23:56.477 ] 00:23:56.477 } 00:23:56.477 }, 00:23:56.477 { 00:23:56.477 "method": "nvmf_set_max_subsystems", 00:23:56.477 "params": { 00:23:56.477 "max_subsystems": 1024 00:23:56.477 } 00:23:56.477 }, 00:23:56.477 { 00:23:56.477 "method": "nvmf_set_crdt", 00:23:56.477 "params": { 00:23:56.477 "crdt1": 0, 00:23:56.477 "crdt2": 0, 00:23:56.477 "crdt3": 0 00:23:56.477 } 00:23:56.477 }, 00:23:56.477 { 00:23:56.477 "method": 
"nvmf_create_transport", 00:23:56.477 "params": { 00:23:56.477 "trtype": "TCP", 00:23:56.477 "max_queue_depth": 128, 00:23:56.477 "max_io_qpairs_per_ctrlr": 127, 00:23:56.477 "in_capsule_data_size": 4096, 00:23:56.477 "max_io_size": 131072, 00:23:56.477 "io_unit_size": 131072, 00:23:56.477 "max_aq_depth": 128, 00:23:56.477 "num_shared_buffers": 511, 00:23:56.477 "buf_cache_size": 4294967295, 00:23:56.477 "dif_insert_or_strip": false, 00:23:56.477 "zcopy": false, 00:23:56.477 "c2h_success": false, 00:23:56.477 "sock_priority": 0, 00:23:56.477 "abort_timeout_sec": 1, 00:23:56.477 "ack_timeout": 0, 00:23:56.477 "data_wr_pool_size": 0 00:23:56.477 } 00:23:56.477 }, 00:23:56.477 { 00:23:56.477 "method": "nvmf_create_subsystem", 00:23:56.477 "params": { 00:23:56.477 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.477 "allow_any_host": false, 00:23:56.477 "serial_number": "00000000000000000000", 00:23:56.477 "model_number": "SPDK bdev Controller", 00:23:56.477 "max_namespaces": 32, 00:23:56.477 "min_cntlid": 1, 00:23:56.477 "max_cntlid": 65519, 00:23:56.477 "ana_reporting": false 00:23:56.477 } 00:23:56.477 }, 00:23:56.477 { 00:23:56.477 "method": "nvmf_subsystem_add_host", 00:23:56.477 "params": { 00:23:56.477 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.477 "host": "nqn.2016-06.io.spdk:host1", 00:23:56.477 "psk": "key0" 00:23:56.477 } 00:23:56.477 }, 00:23:56.477 { 00:23:56.477 "method": "nvmf_subsystem_add_ns", 00:23:56.477 "params": { 00:23:56.477 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.477 "namespace": { 00:23:56.477 "nsid": 1, 00:23:56.477 "bdev_name": "malloc0", 00:23:56.477 "nguid": "C2EAE3ABCA6B43CDB2B3302B6699963A", 00:23:56.477 "uuid": "c2eae3ab-ca6b-43cd-b2b3-302b6699963a", 00:23:56.477 "no_auto_visible": false 00:23:56.477 } 00:23:56.477 } 00:23:56.477 }, 00:23:56.477 { 00:23:56.477 "method": "nvmf_subsystem_add_listener", 00:23:56.477 "params": { 00:23:56.477 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.477 "listen_address": { 00:23:56.477 "trtype": "TCP", 00:23:56.477 "adrfam": "IPv4", 00:23:56.477 "traddr": "10.0.0.2", 00:23:56.477 "trsvcid": "4420" 00:23:56.477 }, 00:23:56.477 "secure_channel": false, 00:23:56.477 "sock_impl": "ssl" 00:23:56.477 } 00:23:56.477 } 00:23:56.477 ] 00:23:56.477 } 00:23:56.477 ] 00:23:56.477 }' 00:23:56.477 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:56.737 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:56.737 "subsystems": [ 00:23:56.737 { 00:23:56.737 "subsystem": "keyring", 00:23:56.737 "config": [ 00:23:56.737 { 00:23:56.737 "method": "keyring_file_add_key", 00:23:56.737 "params": { 00:23:56.737 "name": "key0", 00:23:56.737 "path": "/tmp/tmp.IyV0de1qDz" 00:23:56.737 } 00:23:56.737 } 00:23:56.737 ] 00:23:56.737 }, 00:23:56.737 { 00:23:56.737 "subsystem": "iobuf", 00:23:56.737 "config": [ 00:23:56.737 { 00:23:56.737 "method": "iobuf_set_options", 00:23:56.737 "params": { 00:23:56.737 "small_pool_count": 8192, 00:23:56.737 "large_pool_count": 1024, 00:23:56.737 "small_bufsize": 8192, 00:23:56.737 "large_bufsize": 135168, 00:23:56.737 "enable_numa": false 00:23:56.737 } 00:23:56.737 } 00:23:56.737 ] 00:23:56.737 }, 00:23:56.737 { 00:23:56.737 "subsystem": "sock", 00:23:56.737 "config": [ 00:23:56.737 { 00:23:56.737 "method": "sock_set_default_impl", 00:23:56.737 "params": { 00:23:56.737 "impl_name": "posix" 00:23:56.737 } 00:23:56.737 }, 00:23:56.737 { 00:23:56.737 
"method": "sock_impl_set_options", 00:23:56.737 "params": { 00:23:56.737 "impl_name": "ssl", 00:23:56.737 "recv_buf_size": 4096, 00:23:56.737 "send_buf_size": 4096, 00:23:56.737 "enable_recv_pipe": true, 00:23:56.737 "enable_quickack": false, 00:23:56.737 "enable_placement_id": 0, 00:23:56.737 "enable_zerocopy_send_server": true, 00:23:56.737 "enable_zerocopy_send_client": false, 00:23:56.737 "zerocopy_threshold": 0, 00:23:56.737 "tls_version": 0, 00:23:56.737 "enable_ktls": false 00:23:56.737 } 00:23:56.737 }, 00:23:56.737 { 00:23:56.737 "method": "sock_impl_set_options", 00:23:56.737 "params": { 00:23:56.737 "impl_name": "posix", 00:23:56.737 "recv_buf_size": 2097152, 00:23:56.737 "send_buf_size": 2097152, 00:23:56.737 "enable_recv_pipe": true, 00:23:56.737 "enable_quickack": false, 00:23:56.737 "enable_placement_id": 0, 00:23:56.737 "enable_zerocopy_send_server": true, 00:23:56.737 "enable_zerocopy_send_client": false, 00:23:56.737 "zerocopy_threshold": 0, 00:23:56.737 "tls_version": 0, 00:23:56.737 "enable_ktls": false 00:23:56.737 } 00:23:56.737 } 00:23:56.737 ] 00:23:56.737 }, 00:23:56.737 { 00:23:56.737 "subsystem": "vmd", 00:23:56.737 "config": [] 00:23:56.737 }, 00:23:56.737 { 00:23:56.737 "subsystem": "accel", 00:23:56.737 "config": [ 00:23:56.737 { 00:23:56.737 "method": "accel_set_options", 00:23:56.737 "params": { 00:23:56.737 "small_cache_size": 128, 00:23:56.737 "large_cache_size": 16, 00:23:56.737 "task_count": 2048, 00:23:56.737 "sequence_count": 2048, 00:23:56.737 "buf_count": 2048 00:23:56.737 } 00:23:56.737 } 00:23:56.737 ] 00:23:56.737 }, 00:23:56.738 { 00:23:56.738 "subsystem": "bdev", 00:23:56.738 "config": [ 00:23:56.738 { 00:23:56.738 "method": "bdev_set_options", 00:23:56.738 "params": { 00:23:56.738 "bdev_io_pool_size": 65535, 00:23:56.738 "bdev_io_cache_size": 256, 00:23:56.738 "bdev_auto_examine": true, 00:23:56.738 "iobuf_small_cache_size": 128, 00:23:56.738 "iobuf_large_cache_size": 16 00:23:56.738 } 00:23:56.738 }, 00:23:56.738 { 00:23:56.738 "method": "bdev_raid_set_options", 00:23:56.738 "params": { 00:23:56.738 "process_window_size_kb": 1024, 00:23:56.738 "process_max_bandwidth_mb_sec": 0 00:23:56.738 } 00:23:56.738 }, 00:23:56.738 { 00:23:56.738 "method": "bdev_iscsi_set_options", 00:23:56.738 "params": { 00:23:56.738 "timeout_sec": 30 00:23:56.738 } 00:23:56.738 }, 00:23:56.738 { 00:23:56.738 "method": "bdev_nvme_set_options", 00:23:56.738 "params": { 00:23:56.738 "action_on_timeout": "none", 00:23:56.738 "timeout_us": 0, 00:23:56.738 "timeout_admin_us": 0, 00:23:56.738 "keep_alive_timeout_ms": 10000, 00:23:56.738 "arbitration_burst": 0, 00:23:56.738 "low_priority_weight": 0, 00:23:56.738 "medium_priority_weight": 0, 00:23:56.738 "high_priority_weight": 0, 00:23:56.738 "nvme_adminq_poll_period_us": 10000, 00:23:56.738 "nvme_ioq_poll_period_us": 0, 00:23:56.738 "io_queue_requests": 512, 00:23:56.738 "delay_cmd_submit": true, 00:23:56.738 "transport_retry_count": 4, 00:23:56.738 "bdev_retry_count": 3, 00:23:56.738 "transport_ack_timeout": 0, 00:23:56.738 "ctrlr_loss_timeout_sec": 0, 00:23:56.738 "reconnect_delay_sec": 0, 00:23:56.738 "fast_io_fail_timeout_sec": 0, 00:23:56.738 "disable_auto_failback": false, 00:23:56.738 "generate_uuids": false, 00:23:56.738 "transport_tos": 0, 00:23:56.738 "nvme_error_stat": false, 00:23:56.738 "rdma_srq_size": 0, 00:23:56.738 "io_path_stat": false, 00:23:56.738 "allow_accel_sequence": false, 00:23:56.738 "rdma_max_cq_size": 0, 00:23:56.738 "rdma_cm_event_timeout_ms": 0, 00:23:56.738 "dhchap_digests": [ 00:23:56.738 
"sha256", 00:23:56.738 "sha384", 00:23:56.738 "sha512" 00:23:56.738 ], 00:23:56.738 "dhchap_dhgroups": [ 00:23:56.738 "null", 00:23:56.738 "ffdhe2048", 00:23:56.738 "ffdhe3072", 00:23:56.738 "ffdhe4096", 00:23:56.738 "ffdhe6144", 00:23:56.738 "ffdhe8192" 00:23:56.738 ], 00:23:56.738 "rdma_umr_per_io": false 00:23:56.738 } 00:23:56.738 }, 00:23:56.738 { 00:23:56.738 "method": "bdev_nvme_attach_controller", 00:23:56.738 "params": { 00:23:56.738 "name": "nvme0", 00:23:56.738 "trtype": "TCP", 00:23:56.738 "adrfam": "IPv4", 00:23:56.738 "traddr": "10.0.0.2", 00:23:56.738 "trsvcid": "4420", 00:23:56.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.738 "prchk_reftag": false, 00:23:56.738 "prchk_guard": false, 00:23:56.738 "ctrlr_loss_timeout_sec": 0, 00:23:56.738 "reconnect_delay_sec": 0, 00:23:56.738 "fast_io_fail_timeout_sec": 0, 00:23:56.738 "psk": "key0", 00:23:56.738 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:56.738 "hdgst": false, 00:23:56.738 "ddgst": false, 00:23:56.738 "multipath": "multipath" 00:23:56.738 } 00:23:56.738 }, 00:23:56.738 { 00:23:56.738 "method": "bdev_nvme_set_hotplug", 00:23:56.738 "params": { 00:23:56.738 "period_us": 100000, 00:23:56.738 "enable": false 00:23:56.738 } 00:23:56.738 }, 00:23:56.738 { 00:23:56.738 "method": "bdev_enable_histogram", 00:23:56.738 "params": { 00:23:56.738 "name": "nvme0n1", 00:23:56.738 "enable": true 00:23:56.738 } 00:23:56.738 }, 00:23:56.738 { 00:23:56.738 "method": "bdev_wait_for_examine" 00:23:56.738 } 00:23:56.738 ] 00:23:56.738 }, 00:23:56.738 { 00:23:56.738 "subsystem": "nbd", 00:23:56.738 "config": [] 00:23:56.738 } 00:23:56.738 ] 00:23:56.738 }' 00:23:56.738 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3961545 00:23:56.738 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3961545 ']' 00:23:56.738 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3961545 00:23:56.738 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:56.738 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.738 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3961545 00:23:56.738 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:56.738 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:56.738 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3961545' 00:23:56.738 killing process with pid 3961545 00:23:56.738 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3961545 00:23:56.738 Received shutdown signal, test time was about 1.000000 seconds 00:23:56.738 00:23:56.738 Latency(us) 00:23:56.738 [2024-12-13T09:25:50.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.738 [2024-12-13T09:25:50.629Z] =================================================================================================================== 00:23:56.738 [2024-12-13T09:25:50.629Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:56.738 10:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3961545 00:23:57.675 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3961509 00:23:57.675 10:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3961509 ']' 00:23:57.675 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3961509 00:23:57.675 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:57.675 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:57.675 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3961509 00:23:57.675 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:57.675 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:57.675 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3961509' 00:23:57.675 killing process with pid 3961509 00:23:57.675 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3961509 00:23:57.675 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3961509 00:23:59.054 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:59.054 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:59.054 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:59.054 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:59.054 "subsystems": [ 00:23:59.054 { 00:23:59.054 "subsystem": "keyring", 00:23:59.054 "config": [ 00:23:59.054 { 00:23:59.054 "method": "keyring_file_add_key", 00:23:59.054 "params": { 00:23:59.054 "name": "key0", 00:23:59.054 "path": "/tmp/tmp.IyV0de1qDz" 00:23:59.054 } 00:23:59.054 } 00:23:59.054 ] 00:23:59.054 }, 00:23:59.054 { 00:23:59.054 "subsystem": "iobuf", 00:23:59.054 "config": [ 00:23:59.054 { 00:23:59.054 "method": "iobuf_set_options", 00:23:59.054 "params": { 00:23:59.054 "small_pool_count": 8192, 00:23:59.054 "large_pool_count": 1024, 00:23:59.054 "small_bufsize": 8192, 00:23:59.054 "large_bufsize": 135168, 00:23:59.054 "enable_numa": false 00:23:59.054 } 00:23:59.054 } 00:23:59.054 ] 00:23:59.054 }, 00:23:59.054 { 00:23:59.054 "subsystem": "sock", 00:23:59.054 "config": [ 00:23:59.054 { 00:23:59.054 "method": "sock_set_default_impl", 00:23:59.054 "params": { 00:23:59.054 "impl_name": "posix" 00:23:59.054 } 00:23:59.054 }, 00:23:59.054 { 00:23:59.054 "method": "sock_impl_set_options", 00:23:59.054 "params": { 00:23:59.054 "impl_name": "ssl", 00:23:59.054 "recv_buf_size": 4096, 00:23:59.054 "send_buf_size": 4096, 00:23:59.054 "enable_recv_pipe": true, 00:23:59.054 "enable_quickack": false, 00:23:59.054 "enable_placement_id": 0, 00:23:59.054 "enable_zerocopy_send_server": true, 00:23:59.054 "enable_zerocopy_send_client": false, 00:23:59.054 "zerocopy_threshold": 0, 00:23:59.054 "tls_version": 0, 00:23:59.054 "enable_ktls": false 00:23:59.054 } 00:23:59.054 }, 00:23:59.054 { 00:23:59.054 "method": "sock_impl_set_options", 00:23:59.054 "params": { 00:23:59.054 "impl_name": "posix", 00:23:59.054 "recv_buf_size": 2097152, 00:23:59.054 "send_buf_size": 2097152, 00:23:59.054 "enable_recv_pipe": true, 00:23:59.054 "enable_quickack": false, 00:23:59.054 "enable_placement_id": 0, 00:23:59.054 "enable_zerocopy_send_server": true, 00:23:59.054 "enable_zerocopy_send_client": false, 
00:23:59.054 "zerocopy_threshold": 0, 00:23:59.054 "tls_version": 0, 00:23:59.054 "enable_ktls": false 00:23:59.054 } 00:23:59.054 } 00:23:59.054 ] 00:23:59.054 }, 00:23:59.055 { 00:23:59.055 "subsystem": "vmd", 00:23:59.055 "config": [] 00:23:59.055 }, 00:23:59.055 { 00:23:59.055 "subsystem": "accel", 00:23:59.055 "config": [ 00:23:59.055 { 00:23:59.055 "method": "accel_set_options", 00:23:59.055 "params": { 00:23:59.055 "small_cache_size": 128, 00:23:59.055 "large_cache_size": 16, 00:23:59.055 "task_count": 2048, 00:23:59.055 "sequence_count": 2048, 00:23:59.055 "buf_count": 2048 00:23:59.055 } 00:23:59.055 } 00:23:59.055 ] 00:23:59.055 }, 00:23:59.055 { 00:23:59.055 "subsystem": "bdev", 00:23:59.055 "config": [ 00:23:59.055 { 00:23:59.055 "method": "bdev_set_options", 00:23:59.055 "params": { 00:23:59.055 "bdev_io_pool_size": 65535, 00:23:59.055 "bdev_io_cache_size": 256, 00:23:59.055 "bdev_auto_examine": true, 00:23:59.055 "iobuf_small_cache_size": 128, 00:23:59.055 "iobuf_large_cache_size": 16 00:23:59.055 } 00:23:59.055 }, 00:23:59.055 { 00:23:59.055 "method": "bdev_raid_set_options", 00:23:59.055 "params": { 00:23:59.055 "process_window_size_kb": 1024, 00:23:59.055 "process_max_bandwidth_mb_sec": 0 00:23:59.055 } 00:23:59.055 }, 00:23:59.055 { 00:23:59.055 "method": "bdev_iscsi_set_options", 00:23:59.055 "params": { 00:23:59.055 "timeout_sec": 30 00:23:59.055 } 00:23:59.055 }, 00:23:59.055 { 00:23:59.055 "method": "bdev_nvme_set_options", 00:23:59.055 "params": { 00:23:59.055 "action_on_timeout": "none", 00:23:59.055 "timeout_us": 0, 00:23:59.055 "timeout_admin_us": 0, 00:23:59.055 "keep_alive_timeout_ms": 10000, 00:23:59.055 "arbitration_burst": 0, 00:23:59.055 "low_priority_weight": 0, 00:23:59.055 "medium_priority_weight": 0, 00:23:59.055 "high_priority_weight": 0, 00:23:59.055 "nvme_adminq_poll_period_us": 10000, 00:23:59.055 "nvme_ioq_poll_period_us": 0, 00:23:59.055 "io_queue_requests": 0, 00:23:59.055 "delay_cmd_submit": true, 00:23:59.055 "transport_retry_count": 4, 00:23:59.055 "bdev_retry_count": 3, 00:23:59.055 "transport_ack_timeout": 0, 00:23:59.055 "ctrlr_loss_timeout_sec": 0, 00:23:59.055 "reconnect_delay_sec": 0, 00:23:59.055 "fast_io_fail_timeout_sec": 0, 00:23:59.055 "disable_auto_failback": false, 00:23:59.055 "generate_uuids": false, 00:23:59.055 "transport_tos": 0, 00:23:59.055 "nvme_error_stat": false, 00:23:59.055 "rdma_srq_size": 0, 00:23:59.055 "io_path_stat": false, 00:23:59.055 "allow_accel_sequence": false, 00:23:59.055 "rdma_max_cq_size": 0, 00:23:59.055 "rdma_cm_event_timeout_ms": 0, 00:23:59.055 "dhchap_digests": [ 00:23:59.055 "sha256", 00:23:59.055 "sha384", 00:23:59.055 "sha512" 00:23:59.055 ], 00:23:59.055 "dhchap_dhgroups": [ 00:23:59.055 "null", 00:23:59.055 "ffdhe2048", 00:23:59.055 "ffdhe3072", 00:23:59.055 "ffdhe4096", 00:23:59.055 "ffdhe6144", 00:23:59.055 "ffdhe8192" 00:23:59.055 ], 00:23:59.055 "rdma_umr_per_io": false 00:23:59.055 } 00:23:59.055 }, 00:23:59.055 { 00:23:59.055 "method": "bdev_nvme_set_hotplug", 00:23:59.055 "params": { 00:23:59.055 "period_us": 100000, 00:23:59.055 "enable": false 00:23:59.055 } 00:23:59.055 }, 00:23:59.055 { 00:23:59.055 "method": "bdev_malloc_create", 00:23:59.055 "params": { 00:23:59.055 "name": "malloc0", 00:23:59.055 "num_blocks": 8192, 00:23:59.055 "block_size": 4096, 00:23:59.055 "physical_block_size": 4096, 00:23:59.055 "uuid": "c2eae3ab-ca6b-43cd-b2b3-302b6699963a", 00:23:59.055 "optimal_io_boundary": 0, 00:23:59.055 "md_size": 0, 00:23:59.055 "dif_type": 0, 00:23:59.055 "dif_is_head_of_md": false, 
00:23:59.055 "dif_pi_format": 0 00:23:59.055 } 00:23:59.055 }, 00:23:59.055 { 00:23:59.055 "method": "bdev_wait_for_examine" 00:23:59.055 } 00:23:59.055 ] 00:23:59.055 }, 00:23:59.055 { 00:23:59.055 "subsystem": "nbd", 00:23:59.055 "config": [] 00:23:59.055 }, 00:23:59.055 { 00:23:59.055 "subsystem": "scheduler", 00:23:59.055 "config": [ 00:23:59.055 { 00:23:59.055 "method": "framework_set_scheduler", 00:23:59.055 "params": { 00:23:59.055 "name": "static" 00:23:59.055 } 00:23:59.055 } 00:23:59.055 ] 00:23:59.055 }, 00:23:59.055 { 00:23:59.055 "subsystem": "nvmf", 00:23:59.055 "config": [ 00:23:59.055 { 00:23:59.055 "method": "nvmf_set_config", 00:23:59.055 "params": { 00:23:59.055 "discovery_filter": "match_any", 00:23:59.055 "admin_cmd_passthru": { 00:23:59.055 "identify_ctrlr": false 00:23:59.055 }, 00:23:59.055 "dhchap_digests": [ 00:23:59.055 "sha256", 00:23:59.055 "sha384", 00:23:59.055 "sha512" 00:23:59.055 ], 00:23:59.055 "dhchap_dhgroups": [ 00:23:59.055 "null", 00:23:59.055 "ffdhe2048", 00:23:59.055 "ffdhe3072", 00:23:59.055 "ffdhe4096", 00:23:59.055 "ffdhe6144", 00:23:59.055 "ffdhe8192" 00:23:59.055 ] 00:23:59.055 } 00:23:59.055 }, 00:23:59.055 { 00:23:59.055 "method": "nvmf_set_max_subsystems", 00:23:59.055 "params": { 00:23:59.055 "max_subsystems": 1024 00:23:59.055 } 00:23:59.055 }, 00:23:59.055 { 00:23:59.055 "method": "nvmf_set_crdt", 00:23:59.055 "params": { 00:23:59.055 "crdt1": 0, 00:23:59.055 "crdt2": 0, 00:23:59.055 "crdt3": 0 00:23:59.055 } 00:23:59.055 }, 00:23:59.055 { 00:23:59.055 "method": "nvmf_create_transport", 00:23:59.055 "params": { 00:23:59.055 "trtype": "TCP", 00:23:59.055 "max_queue_depth": 128, 00:23:59.055 "max_io_qpairs_per_ctrlr": 127, 00:23:59.055 "in_capsule_data_size": 4096, 00:23:59.055 "max_io_size": 131072, 00:23:59.055 "io_unit_size": 131072, 00:23:59.055 "max_aq_depth": 128, 00:23:59.055 "num_shared_buffers": 511, 00:23:59.055 "buf_cache_size": 4294967295, 00:23:59.055 "dif_insert_or_strip": false, 00:23:59.055 "zcopy": false, 00:23:59.055 "c2h_success": false, 00:23:59.055 "sock_priority": 0, 00:23:59.055 "abort_timeout_sec": 1, 00:23:59.055 "ack_timeout": 0, 00:23:59.055 "data_wr_pool_size": 0 00:23:59.055 } 00:23:59.055 }, 00:23:59.055 { 00:23:59.055 "method": "nvmf_create_subsystem", 00:23:59.055 "params": { 00:23:59.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.055 "allow_any_host": false, 00:23:59.055 "serial_number": "00000000000000000000", 00:23:59.055 "model_number": "SPDK bdev Controller", 00:23:59.055 "max_namespaces": 32, 00:23:59.055 "min_cntlid": 1, 00:23:59.055 "max_cntlid": 65519, 00:23:59.055 "ana_reporting": false 00:23:59.055 } 00:23:59.055 }, 00:23:59.055 { 00:23:59.055 "method": "nvmf_subsystem_add_host", 00:23:59.055 "params": { 00:23:59.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.055 "host": "nqn.2016-06.io.spdk:host1", 00:23:59.055 "psk": "key0" 00:23:59.055 } 00:23:59.055 }, 00:23:59.055 { 00:23:59.055 "method": "nvmf_subsystem_add_ns", 00:23:59.055 "params": { 00:23:59.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.055 "namespace": { 00:23:59.055 "nsid": 1, 00:23:59.055 "bdev_name": "malloc0", 00:23:59.055 "nguid": "C2EAE3ABCA6B43CDB2B3302B6699963A", 00:23:59.055 "uuid": "c2eae3ab-ca6b-43cd-b2b3-302b6699963a", 00:23:59.055 "no_auto_visible": false 00:23:59.055 } 00:23:59.055 } 00:23:59.055 }, 00:23:59.055 { 00:23:59.055 "method": "nvmf_subsystem_add_listener", 00:23:59.055 "params": { 00:23:59.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.055 "listen_address": { 00:23:59.055 "trtype": "TCP", 
00:23:59.055 "adrfam": "IPv4", 00:23:59.055 "traddr": "10.0.0.2", 00:23:59.055 "trsvcid": "4420" 00:23:59.055 }, 00:23:59.055 "secure_channel": false, 00:23:59.055 "sock_impl": "ssl" 00:23:59.055 } 00:23:59.055 } 00:23:59.055 ] 00:23:59.055 } 00:23:59.055 ] 00:23:59.055 }' 00:23:59.055 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.055 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3962448 00:23:59.055 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:59.055 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3962448 00:23:59.055 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3962448 ']' 00:23:59.055 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.055 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.055 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.055 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.055 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.055 [2024-12-13 10:25:52.647398] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:59.055 [2024-12-13 10:25:52.647495] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.055 [2024-12-13 10:25:52.764406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.055 [2024-12-13 10:25:52.866133] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.055 [2024-12-13 10:25:52.866181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.055 [2024-12-13 10:25:52.866191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.055 [2024-12-13 10:25:52.866201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.055 [2024-12-13 10:25:52.866208] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:59.056 [2024-12-13 10:25:52.867623] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.624 [2024-12-13 10:25:53.364039] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.624 [2024-12-13 10:25:53.396088] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:59.624 [2024-12-13 10:25:53.396328] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.624 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.624 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:59.624 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:59.624 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:59.624 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.624 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.624 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3962651 00:23:59.624 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3962651 /var/tmp/bdevperf.sock 00:23:59.624 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3962651 ']' 00:23:59.624 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:59.624 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:59.624 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.624 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:59.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
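bdevperf is launched here with -z, so it only parses the config handed in on /dev/fd/63, attaches its bdevs (including the TLS-protected NVMe/TCP controller), and then idles on the RPC socket until the run is triggered externally. A minimal sketch of that two-step flow, assuming the same socket path and an already prepared config file (the file name is illustrative):

    # 1. Launch bdevperf in wait mode (-z): load config, attach bdevs, wait on RPC.
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c ./bperf_config.json &

    # 2. Once the socket is listening, kick off the actual I/O run over RPC.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

This is the same sequence the log records further down: waitforlisten on /var/tmp/bdevperf.sock, then perform_tests, then the per-core IOPS and latency summary.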
00:23:59.624 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:59.624 "subsystems": [ 00:23:59.624 { 00:23:59.624 "subsystem": "keyring", 00:23:59.624 "config": [ 00:23:59.624 { 00:23:59.624 "method": "keyring_file_add_key", 00:23:59.624 "params": { 00:23:59.624 "name": "key0", 00:23:59.624 "path": "/tmp/tmp.IyV0de1qDz" 00:23:59.624 } 00:23:59.624 } 00:23:59.624 ] 00:23:59.624 }, 00:23:59.624 { 00:23:59.624 "subsystem": "iobuf", 00:23:59.624 "config": [ 00:23:59.625 { 00:23:59.625 "method": "iobuf_set_options", 00:23:59.625 "params": { 00:23:59.625 "small_pool_count": 8192, 00:23:59.625 "large_pool_count": 1024, 00:23:59.625 "small_bufsize": 8192, 00:23:59.625 "large_bufsize": 135168, 00:23:59.625 "enable_numa": false 00:23:59.625 } 00:23:59.625 } 00:23:59.625 ] 00:23:59.625 }, 00:23:59.625 { 00:23:59.625 "subsystem": "sock", 00:23:59.625 "config": [ 00:23:59.625 { 00:23:59.625 "method": "sock_set_default_impl", 00:23:59.625 "params": { 00:23:59.625 "impl_name": "posix" 00:23:59.625 } 00:23:59.625 }, 00:23:59.625 { 00:23:59.625 "method": "sock_impl_set_options", 00:23:59.625 "params": { 00:23:59.625 "impl_name": "ssl", 00:23:59.625 "recv_buf_size": 4096, 00:23:59.625 "send_buf_size": 4096, 00:23:59.625 "enable_recv_pipe": true, 00:23:59.625 "enable_quickack": false, 00:23:59.625 "enable_placement_id": 0, 00:23:59.625 "enable_zerocopy_send_server": true, 00:23:59.625 "enable_zerocopy_send_client": false, 00:23:59.625 "zerocopy_threshold": 0, 00:23:59.625 "tls_version": 0, 00:23:59.625 "enable_ktls": false 00:23:59.625 } 00:23:59.625 }, 00:23:59.625 { 00:23:59.625 "method": "sock_impl_set_options", 00:23:59.625 "params": { 00:23:59.625 "impl_name": "posix", 00:23:59.625 "recv_buf_size": 2097152, 00:23:59.625 "send_buf_size": 2097152, 00:23:59.625 "enable_recv_pipe": true, 00:23:59.625 "enable_quickack": false, 00:23:59.625 "enable_placement_id": 0, 00:23:59.625 "enable_zerocopy_send_server": true, 00:23:59.625 "enable_zerocopy_send_client": false, 00:23:59.625 "zerocopy_threshold": 0, 00:23:59.625 "tls_version": 0, 00:23:59.625 "enable_ktls": false 00:23:59.625 } 00:23:59.625 } 00:23:59.625 ] 00:23:59.625 }, 00:23:59.625 { 00:23:59.625 "subsystem": "vmd", 00:23:59.625 "config": [] 00:23:59.625 }, 00:23:59.625 { 00:23:59.625 "subsystem": "accel", 00:23:59.625 "config": [ 00:23:59.625 { 00:23:59.625 "method": "accel_set_options", 00:23:59.625 "params": { 00:23:59.625 "small_cache_size": 128, 00:23:59.625 "large_cache_size": 16, 00:23:59.625 "task_count": 2048, 00:23:59.625 "sequence_count": 2048, 00:23:59.625 "buf_count": 2048 00:23:59.625 } 00:23:59.625 } 00:23:59.625 ] 00:23:59.625 }, 00:23:59.625 { 00:23:59.625 "subsystem": "bdev", 00:23:59.625 "config": [ 00:23:59.625 { 00:23:59.625 "method": "bdev_set_options", 00:23:59.625 "params": { 00:23:59.625 "bdev_io_pool_size": 65535, 00:23:59.625 "bdev_io_cache_size": 256, 00:23:59.625 "bdev_auto_examine": true, 00:23:59.625 "iobuf_small_cache_size": 128, 00:23:59.625 "iobuf_large_cache_size": 16 00:23:59.625 } 00:23:59.625 }, 00:23:59.625 { 00:23:59.625 "method": "bdev_raid_set_options", 00:23:59.625 "params": { 00:23:59.625 "process_window_size_kb": 1024, 00:23:59.625 "process_max_bandwidth_mb_sec": 0 00:23:59.625 } 00:23:59.625 }, 00:23:59.625 { 00:23:59.625 "method": "bdev_iscsi_set_options", 00:23:59.625 "params": { 00:23:59.625 "timeout_sec": 30 00:23:59.625 } 00:23:59.625 }, 00:23:59.625 { 00:23:59.625 "method": "bdev_nvme_set_options", 00:23:59.625 "params": { 00:23:59.625 "action_on_timeout": "none", 
00:23:59.625 "timeout_us": 0, 00:23:59.625 "timeout_admin_us": 0, 00:23:59.625 "keep_alive_timeout_ms": 10000, 00:23:59.625 "arbitration_burst": 0, 00:23:59.625 "low_priority_weight": 0, 00:23:59.625 "medium_priority_weight": 0, 00:23:59.625 "high_priority_weight": 0, 00:23:59.625 "nvme_adminq_poll_period_us": 10000, 00:23:59.625 "nvme_ioq_poll_period_us": 0, 00:23:59.625 "io_queue_requests": 512, 00:23:59.625 "delay_cmd_submit": true, 00:23:59.625 "transport_retry_count": 4, 00:23:59.625 "bdev_retry_count": 3, 00:23:59.625 "transport_ack_timeout": 0, 00:23:59.625 "ctrlr_loss_timeout_sec": 0, 00:23:59.625 "reconnect_delay_sec": 0, 00:23:59.625 "fast_io_fail_timeout_sec": 0, 00:23:59.625 "disable_auto_failback": false, 00:23:59.625 "generate_uuids": false, 00:23:59.625 "transport_tos": 0, 00:23:59.625 "nvme_error_stat": false, 00:23:59.625 "rdma_srq_size": 0, 00:23:59.625 "io_path_stat": false, 00:23:59.625 "allow_accel_sequence": false, 00:23:59.625 "rdma_max_cq_size": 0, 00:23:59.625 "rdma_cm_event_timeout_ms": 0, 00:23:59.625 "dhchap_digests": [ 00:23:59.625 "sha256", 00:23:59.625 "sha384", 00:23:59.625 "sha512" 00:23:59.625 ], 00:23:59.625 "dhchap_dhgroups": [ 00:23:59.625 "null", 00:23:59.625 "ffdhe2048", 00:23:59.625 "ffdhe3072", 00:23:59.625 "ffdhe4096", 00:23:59.625 "ffdhe6144", 00:23:59.625 "ffdhe8192" 00:23:59.625 ], 00:23:59.625 "rdma_umr_per_io": false 00:23:59.625 } 00:23:59.625 }, 00:23:59.625 { 00:23:59.625 "method": "bdev_nvme_attach_controller", 00:23:59.625 "params": { 00:23:59.625 "name": "nvme0", 00:23:59.625 "trtype": "TCP", 00:23:59.625 "adrfam": "IPv4", 00:23:59.625 "traddr": "10.0.0.2", 00:23:59.625 "trsvcid": "4420", 00:23:59.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.625 "prchk_reftag": false, 00:23:59.625 "prchk_guard": false, 00:23:59.625 "ctrlr_loss_timeout_sec": 0, 00:23:59.625 "reconnect_delay_sec": 0, 00:23:59.625 "fast_io_fail_timeout_sec": 0, 00:23:59.625 "psk": "key0", 00:23:59.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:59.625 "hdgst": false, 00:23:59.625 "ddgst": false, 00:23:59.625 "multipath": "multipath" 00:23:59.625 } 00:23:59.625 }, 00:23:59.625 { 00:23:59.625 "method": "bdev_nvme_set_hotplug", 00:23:59.625 "params": { 00:23:59.625 "period_us": 100000, 00:23:59.625 "enable": false 00:23:59.625 } 00:23:59.625 }, 00:23:59.625 { 00:23:59.625 "method": "bdev_enable_histogram", 00:23:59.625 "params": { 00:23:59.625 "name": "nvme0n1", 00:23:59.625 "enable": true 00:23:59.625 } 00:23:59.625 }, 00:23:59.625 { 00:23:59.625 "method": "bdev_wait_for_examine" 00:23:59.625 } 00:23:59.625 ] 00:23:59.625 }, 00:23:59.625 { 00:23:59.625 "subsystem": "nbd", 00:23:59.625 "config": [] 00:23:59.625 } 00:23:59.625 ] 00:23:59.625 }' 00:23:59.625 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.625 10:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.884 [2024-12-13 10:25:53.552210] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:23:59.884 [2024-12-13 10:25:53.552294] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3962651 ] 00:23:59.884 [2024-12-13 10:25:53.663483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.884 [2024-12-13 10:25:53.773439] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.452 [2024-12-13 10:25:54.185421] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:00.711 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.711 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:00.711 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:00.711 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:00.711 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.711 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:00.970 Running I/O for 1 seconds... 00:24:01.906 4595.00 IOPS, 17.95 MiB/s 00:24:01.906 Latency(us) 00:24:01.906 [2024-12-13T09:25:55.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.906 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:01.906 Verification LBA range: start 0x0 length 0x2000 00:24:01.906 nvme0n1 : 1.02 4641.77 18.13 0.00 0.00 27330.45 5586.16 32705.58 00:24:01.906 [2024-12-13T09:25:55.797Z] =================================================================================================================== 00:24:01.906 [2024-12-13T09:25:55.797Z] Total : 4641.77 18.13 0.00 0.00 27330.45 5586.16 32705.58 00:24:01.906 { 00:24:01.906 "results": [ 00:24:01.906 { 00:24:01.906 "job": "nvme0n1", 00:24:01.906 "core_mask": "0x2", 00:24:01.906 "workload": "verify", 00:24:01.906 "status": "finished", 00:24:01.906 "verify_range": { 00:24:01.906 "start": 0, 00:24:01.906 "length": 8192 00:24:01.906 }, 00:24:01.906 "queue_depth": 128, 00:24:01.906 "io_size": 4096, 00:24:01.906 "runtime": 1.017499, 00:24:01.906 "iops": 4641.773603708702, 00:24:01.906 "mibps": 18.131928139487115, 00:24:01.906 "io_failed": 0, 00:24:01.906 "io_timeout": 0, 00:24:01.906 "avg_latency_us": 27330.447361342165, 00:24:01.906 "min_latency_us": 5586.1638095238095, 00:24:01.906 "max_latency_us": 32705.584761904764 00:24:01.906 } 00:24:01.906 ], 00:24:01.906 "core_count": 1 00:24:01.906 } 00:24:01.906 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:01.906 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:01.906 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:01.906 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:01.906 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:01.906 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' 
--id = --pid ']' 00:24:01.906 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:01.906 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:01.906 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:01.906 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:01.906 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:01.906 nvmf_trace.0 00:24:01.906 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:01.906 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3962651 00:24:01.906 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3962651 ']' 00:24:01.906 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3962651 00:24:01.906 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:01.906 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:01.906 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3962651 00:24:02.165 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:02.165 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:02.165 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3962651' 00:24:02.165 killing process with pid 3962651 00:24:02.165 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3962651 00:24:02.165 Received shutdown signal, test time was about 1.000000 seconds 00:24:02.165 00:24:02.165 Latency(us) 00:24:02.165 [2024-12-13T09:25:56.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.165 [2024-12-13T09:25:56.057Z] =================================================================================================================== 00:24:02.166 [2024-12-13T09:25:56.057Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:02.166 10:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3962651 00:24:03.102 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:03.102 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:03.102 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:03.102 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:03.103 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:03.103 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.103 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.103 rmmod nvme_tcp 00:24:03.103 rmmod nvme_fabrics 00:24:03.103 rmmod nvme_keyring 00:24:03.103 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:03.103 10:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:03.103 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:03.103 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3962448 ']' 00:24:03.103 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3962448 00:24:03.103 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3962448 ']' 00:24:03.103 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3962448 00:24:03.103 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:03.103 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.103 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3962448 00:24:03.103 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:03.103 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:03.103 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3962448' 00:24:03.103 killing process with pid 3962448 00:24:03.103 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3962448 00:24:03.103 10:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3962448 00:24:04.481 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:04.481 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:04.481 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:04.481 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:04.481 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:04.481 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:04.481 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:04.481 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.481 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:04.481 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.481 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.481 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.389 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:06.389 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.WUOot6VNck /tmp/tmp.ppWchDVReS /tmp/tmp.IyV0de1qDz 00:24:06.389 00:24:06.389 real 1m45.894s 00:24:06.389 user 2m44.280s 00:24:06.389 sys 0m31.417s 00:24:06.389 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:06.389 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.389 ************************************ 00:24:06.389 END TEST nvmf_tls 
00:24:06.389 ************************************ 00:24:06.389 10:26:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:06.389 10:26:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:06.389 10:26:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:06.389 10:26:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:06.389 ************************************ 00:24:06.389 START TEST nvmf_fips 00:24:06.389 ************************************ 00:24:06.389 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:06.389 * Looking for test storage... 00:24:06.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:06.389 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:06.389 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:24:06.389 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:06.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.649 --rc genhtml_branch_coverage=1 00:24:06.649 --rc genhtml_function_coverage=1 00:24:06.649 --rc genhtml_legend=1 00:24:06.649 --rc geninfo_all_blocks=1 00:24:06.649 --rc geninfo_unexecuted_blocks=1 00:24:06.649 00:24:06.649 ' 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:06.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.649 --rc genhtml_branch_coverage=1 00:24:06.649 --rc genhtml_function_coverage=1 00:24:06.649 --rc genhtml_legend=1 00:24:06.649 --rc geninfo_all_blocks=1 00:24:06.649 --rc geninfo_unexecuted_blocks=1 00:24:06.649 00:24:06.649 ' 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:06.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.649 --rc genhtml_branch_coverage=1 00:24:06.649 --rc genhtml_function_coverage=1 00:24:06.649 --rc genhtml_legend=1 00:24:06.649 --rc geninfo_all_blocks=1 00:24:06.649 --rc geninfo_unexecuted_blocks=1 00:24:06.649 00:24:06.649 ' 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:06.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.649 --rc genhtml_branch_coverage=1 00:24:06.649 --rc genhtml_function_coverage=1 00:24:06.649 --rc genhtml_legend=1 00:24:06.649 --rc geninfo_all_blocks=1 00:24:06.649 --rc geninfo_unexecuted_blocks=1 00:24:06.649 00:24:06.649 ' 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.649 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:06.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:06.650 10:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:06.650 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:06.651 Error setting digest 00:24:06.651 4012B364F37F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:06.651 4012B364F37F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:06.651 
10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:06.651 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:11.921 10:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:11.921 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:11.922 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:11.922 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:11.922 10:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:11.922 Found net devices under 0000:af:00.0: cvl_0_0 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:11.922 Found net devices under 0000:af:00.1: cvl_0_1 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:11.922 10:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:11.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:11.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:24:11.922 00:24:11.922 --- 10.0.0.2 ping statistics --- 00:24:11.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.922 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:11.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
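Aside for readers following the fips.sh portion of this trace: before any networking is configured, the test confirms that OpenSSL is at least 3.0, that a FIPS provider module (fips.so) is installed, that a FIPS-only config is in force via OPENSSL_CONF=spdk_fips.conf, and that a non-approved digest such as MD5 is rejected (the "Error setting digest" lines above are the expected outcome). The following is a minimal stand-alone sketch of that check, not the script itself; the real logic lives in test/nvmf/fips/fips.sh, and the version comparison here uses sort -V instead of the script's own cmp_versions helper.

    # fips_check.sh - illustrative sketch of the checks traced above
    set -e

    # 1. Require OpenSSL >= 3.0 (FIPS providers are an OpenSSL 3.x feature).
    osslver=$(openssl version | awk '{print $2}')
    printf '%s\n%s\n' "3.0.0" "$osslver" | sort -V -C || { echo "need OpenSSL >= 3.0"; exit 1; }

    # 2. The FIPS provider module must be installed.
    moddir=$(openssl info -modulesdir)
    [ -f "$moddir/fips.so" ] || { echo "fips.so not found in $moddir"; exit 1; }

    # 3. Point OpenSSL at the FIPS-only config (assumed to have been generated
    #    already by the test's build_openssl_config step).
    export OPENSSL_CONF=spdk_fips.conf

    # 4. Both the base and fips providers should now be listed as active.
    openssl list -providers | grep name

    # 5. A non-approved digest must be rejected under this configuration.
    if openssl md5 /dev/null >/dev/null 2>&1; then
        echo "MD5 unexpectedly succeeded - not running in FIPS mode"
        exit 1
    fi
    echo "FIPS mode verified"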
00:24:11.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:24:11.922 00:24:11.922 --- 10.0.0.1 ping statistics --- 00:24:11.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.922 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3966693 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3966693 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3966693 ']' 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:11.922 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:12.181 [2024-12-13 10:26:05.919377] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
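The nvmftestinit/nvmf_tcp_init sequence traced above builds a two-port loopback on the E810 NIC: one port is moved into a private network namespace and becomes the target side (10.0.0.2), while its peer stays in the default namespace as the initiator (10.0.0.1). A condensed sketch of those steps is shown below, using the interface and namespace names from this run; the canonical implementation is nvmf_tcp_init in test/nvmf/common.sh.

    # Illustrative recreation of the topology seen in the trace above.
    TGT_IF=cvl_0_0            # target-side port (moved into the namespace)
    INI_IF=cvl_0_1            # initiator-side port (stays in the default ns)
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Allow NVMe/TCP traffic (port 4420) in on the initiator port; the comment
    # tag lets cleanup remove exactly this rule later.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Sanity-check both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1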
00:24:12.181 [2024-12-13 10:26:05.919507] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.181 [2024-12-13 10:26:06.044307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.441 [2024-12-13 10:26:06.149205] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.441 [2024-12-13 10:26:06.149246] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.441 [2024-12-13 10:26:06.149257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.441 [2024-12-13 10:26:06.149267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.441 [2024-12-13 10:26:06.149275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:12.441 [2024-12-13 10:26:06.150695] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.009 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:13.009 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:13.009 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:13.009 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:13.009 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:13.009 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.009 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:13.009 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:13.009 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:13.009 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.6vb 00:24:13.009 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:13.009 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.6vb 00:24:13.009 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.6vb 00:24:13.009 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.6vb 00:24:13.009 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:13.010 [2024-12-13 10:26:06.894565] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.268 [2024-12-13 10:26:06.910549] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:13.268 [2024-12-13 10:26:06.910776] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.268 malloc0 00:24:13.268 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:13.269 10:26:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3966890 00:24:13.269 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:13.269 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3966890 /var/tmp/bdevperf.sock 00:24:13.269 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3966890 ']' 00:24:13.269 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:13.269 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:13.269 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:13.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:13.269 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:13.269 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:13.269 [2024-12-13 10:26:07.089640] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:24:13.269 [2024-12-13 10:26:07.089749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3966890 ] 00:24:13.528 [2024-12-13 10:26:07.207056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.528 [2024-12-13 10:26:07.319877] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.095 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:14.095 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:14.095 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.6vb 00:24:14.354 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:14.354 [2024-12-13 10:26:08.202726] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:14.613 TLSTESTn1 00:24:14.613 10:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:14.613 Running I/O for 10 seconds... 
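On the initiator side the TLS handshake is driven entirely through RPCs: the PSK is written to a mode-0600 file, registered in the bdevperf keyring, and then referenced by name when attaching the controller to the TLS-enabled listener at 10.0.0.2:4420 that the target set up earlier. Below is a condensed sketch of the commands traced above; paths and the key value are the ones from this run, SPDK_DIR is a placeholder for the checked-out tree, and the sleep stands in for the harness's waitforlisten polling.

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # as used in this run
    RPC=$SPDK_DIR/scripts/rpc.py
    BDEVPERF_SOCK=/var/tmp/bdevperf.sock

    # Store the NVMe/TCP PSK in a root-only file.
    key_path=$(mktemp -t spdk-psk.XXX)
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
    chmod 0600 "$key_path"

    # Start bdevperf in wait-for-RPC mode (-z) on its own core.
    $SPDK_DIR/build/examples/bdevperf -m 0x4 -z -r "$BDEVPERF_SOCK" \
        -q 128 -o 4096 -w verify -t 10 &
    sleep 2   # the real harness polls the RPC socket instead of sleeping

    # Register the PSK and attach to the TLS-enabled listener.
    $RPC -s "$BDEVPERF_SOCK" keyring_file_add_key key0 "$key_path"
    $RPC -s "$BDEVPERF_SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0

    # Kick off the 10-second verify workload measured in the results below.
    $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s "$BDEVPERF_SOCK" perform_tests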
00:24:16.927 4635.00 IOPS, 18.11 MiB/s [2024-12-13T09:26:11.754Z] 4600.00 IOPS, 17.97 MiB/s [2024-12-13T09:26:12.690Z] 4608.33 IOPS, 18.00 MiB/s [2024-12-13T09:26:13.720Z] 4593.50 IOPS, 17.94 MiB/s [2024-12-13T09:26:14.671Z] 4625.40 IOPS, 18.07 MiB/s [2024-12-13T09:26:15.606Z] 4566.50 IOPS, 17.84 MiB/s [2024-12-13T09:26:16.542Z] 4551.00 IOPS, 17.78 MiB/s [2024-12-13T09:26:17.478Z] 4504.25 IOPS, 17.59 MiB/s [2024-12-13T09:26:18.855Z] 4481.22 IOPS, 17.50 MiB/s [2024-12-13T09:26:18.855Z] 4452.40 IOPS, 17.39 MiB/s 00:24:24.964 Latency(us) 00:24:24.964 [2024-12-13T09:26:18.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.964 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:24.964 Verification LBA range: start 0x0 length 0x2000 00:24:24.964 TLSTESTn1 : 10.03 4452.69 17.39 0.00 0.00 28692.21 8550.89 31956.60 00:24:24.964 [2024-12-13T09:26:18.855Z] =================================================================================================================== 00:24:24.964 [2024-12-13T09:26:18.855Z] Total : 4452.69 17.39 0.00 0.00 28692.21 8550.89 31956.60 00:24:24.964 { 00:24:24.964 "results": [ 00:24:24.964 { 00:24:24.964 "job": "TLSTESTn1", 00:24:24.964 "core_mask": "0x4", 00:24:24.964 "workload": "verify", 00:24:24.964 "status": "finished", 00:24:24.964 "verify_range": { 00:24:24.964 "start": 0, 00:24:24.964 "length": 8192 00:24:24.964 }, 00:24:24.964 "queue_depth": 128, 00:24:24.964 "io_size": 4096, 00:24:24.964 "runtime": 10.02786, 00:24:24.964 "iops": 4452.694792308628, 00:24:24.964 "mibps": 17.39333903245558, 00:24:24.964 "io_failed": 0, 00:24:24.964 "io_timeout": 0, 00:24:24.964 "avg_latency_us": 28692.21198480064, 00:24:24.964 "min_latency_us": 8550.887619047619, 00:24:24.964 "max_latency_us": 31956.601904761905 00:24:24.964 } 00:24:24.964 ], 00:24:24.964 "core_count": 1 00:24:24.964 } 00:24:24.964 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:24.964 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:24.964 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:24.964 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:24.964 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:24.964 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:24.964 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:24.964 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:24.964 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:24.964 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:24.964 nvmf_trace.0 00:24:24.964 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:24.964 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3966890 00:24:24.964 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3966890 ']' 00:24:24.964 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 3966890 00:24:24.964 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:24.964 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.964 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3966890 00:24:24.964 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:24.964 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:24.964 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3966890' 00:24:24.964 killing process with pid 3966890 00:24:24.964 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3966890 00:24:24.964 Received shutdown signal, test time was about 10.000000 seconds 00:24:24.964 00:24:24.964 Latency(us) 00:24:24.964 [2024-12-13T09:26:18.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.965 [2024-12-13T09:26:18.856Z] =================================================================================================================== 00:24:24.965 [2024-12-13T09:26:18.856Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:24.965 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3966890 00:24:25.902 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:25.902 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:25.902 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:25.902 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:25.902 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:25.902 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:25.902 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:25.902 rmmod nvme_tcp 00:24:25.902 rmmod nvme_fabrics 00:24:25.902 rmmod nvme_keyring 00:24:25.902 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:25.902 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:25.902 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:25.902 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3966693 ']' 00:24:25.902 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3966693 00:24:25.902 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3966693 ']' 00:24:25.902 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3966693 00:24:25.902 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:25.902 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.902 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3966693 00:24:25.902 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:25.902 10:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:25.902 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3966693' 00:24:25.902 killing process with pid 3966693 00:24:25.902 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3966693 00:24:25.902 10:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3966693 00:24:27.280 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:27.280 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:27.280 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:27.280 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:27.280 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:27.280 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:27.280 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:27.280 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:27.280 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:27.280 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.280 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.280 10:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.186 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:29.186 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.6vb 00:24:29.186 00:24:29.186 real 0m22.793s 00:24:29.186 user 0m25.739s 00:24:29.186 sys 0m9.056s 00:24:29.186 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:29.186 10:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:29.186 ************************************ 00:24:29.186 END TEST nvmf_fips 00:24:29.186 ************************************ 00:24:29.186 10:26:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:29.186 10:26:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:29.186 10:26:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:29.186 10:26:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:29.186 ************************************ 00:24:29.186 START TEST nvmf_control_msg_list 00:24:29.186 ************************************ 00:24:29.186 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:29.446 * Looking for test storage... 
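Teardown, as traced above, is symmetric with the setup: the bdevperf and nvmf_tgt processes are killed, the NVMe/TCP kernel modules are unloaded, only the iptables rules tagged with the SPDK_NVMF comment are dropped, the namespace is removed, and the PSK file is deleted. A sketch of that cleanup path follows; the pid variables are placeholders (the values 3966890 and 3966693 belong to this run only), and the namespace removal stands in for the _remove_spdk_ns helper.

    NS=cvl_0_0_ns_spdk

    # Stop the initiator (bdevperf) and the target (nvmf_tgt).
    kill "$bdevperf_pid" "$nvmf_tgt_pid" 2>/dev/null || true

    # Unload the kernel pieces pulled in by "modprobe nvme-tcp" during setup.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Remove only the firewall rules this test added (tagged SPDK_NVMF).
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Tear down the namespace and flush the initiator-side address.
    ip netns delete "$NS" 2>/dev/null || true
    ip -4 addr flush cvl_0_1

    # Discard the TLS PSK material.
    rm -f /tmp/spdk-psk.6vb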
00:24:29.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:29.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.446 --rc genhtml_branch_coverage=1 00:24:29.446 --rc genhtml_function_coverage=1 00:24:29.446 --rc genhtml_legend=1 00:24:29.446 --rc geninfo_all_blocks=1 00:24:29.446 --rc geninfo_unexecuted_blocks=1 00:24:29.446 00:24:29.446 ' 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:29.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.446 --rc genhtml_branch_coverage=1 00:24:29.446 --rc genhtml_function_coverage=1 00:24:29.446 --rc genhtml_legend=1 00:24:29.446 --rc geninfo_all_blocks=1 00:24:29.446 --rc geninfo_unexecuted_blocks=1 00:24:29.446 00:24:29.446 ' 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:29.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.446 --rc genhtml_branch_coverage=1 00:24:29.446 --rc genhtml_function_coverage=1 00:24:29.446 --rc genhtml_legend=1 00:24:29.446 --rc geninfo_all_blocks=1 00:24:29.446 --rc geninfo_unexecuted_blocks=1 00:24:29.446 00:24:29.446 ' 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:29.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:29.446 --rc genhtml_branch_coverage=1 00:24:29.446 --rc genhtml_function_coverage=1 00:24:29.446 --rc genhtml_legend=1 00:24:29.446 --rc geninfo_all_blocks=1 00:24:29.446 --rc geninfo_unexecuted_blocks=1 00:24:29.446 00:24:29.446 ' 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:29.446 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:29.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:29.447 10:26:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:34.722 10:26:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:34.722 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.722 10:26:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:34.722 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:34.722 Found net devices under 0000:af:00.0: cvl_0_0 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:34.722 Found net devices under 0000:af:00.1: cvl_0_1 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.722 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.981 10:26:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:34.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:24:34.981 00:24:34.981 --- 10.0.0.2 ping statistics --- 00:24:34.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.981 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:34.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:24:34.981 00:24:34.981 --- 10.0.0.1 ping statistics --- 00:24:34.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.981 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3972577 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3972577 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3972577 ']' 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.981 10:26:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:34.981 [2024-12-13 10:26:28.872040] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:24:34.981 [2024-12-13 10:26:28.872147] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.240 [2024-12-13 10:26:28.989533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.240 [2024-12-13 10:26:29.091501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.240 [2024-12-13 10:26:29.091545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.240 [2024-12-13 10:26:29.091555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.240 [2024-12-13 10:26:29.091565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.240 [2024-12-13 10:26:29.091573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
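For reference, the nvmf_tcp_init sequence traced above amounts to roughly the following shell steps; the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and the namespace name are the values used in this run and would differ on other machines:

  # move the target-side E810 port into its own namespace so initiator and target
  # traffic cross the physical link rather than the local stack
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP listener port and sanity-check connectivity both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application itself is then launched inside that namespace, as the nvmfpid line above shows, via "ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF".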
00:24:35.240 [2024-12-13 10:26:29.092805] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.807 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.807 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:35.807 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:35.807 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:35.807 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:36.066 [2024-12-13 10:26:29.727576] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:36.066 Malloc0 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.066 10:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:36.066 [2024-12-13 10:26:29.793742] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3972742 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3972744 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3972746 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3972742 00:24:36.066 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:36.066 [2024-12-13 10:26:29.903280] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:36.066 [2024-12-13 10:26:29.903545] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:36.066 [2024-12-13 10:26:29.912501] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:37.443 Initializing NVMe Controllers 00:24:37.443 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:37.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:37.443 Initialization complete. Launching workers. 
00:24:37.443 ======================================================== 00:24:37.443 Latency(us) 00:24:37.443 Device Information : IOPS MiB/s Average min max 00:24:37.443 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4031.00 15.75 247.60 167.57 16176.81 00:24:37.443 ======================================================== 00:24:37.443 Total : 4031.00 15.75 247.60 167.57 16176.81 00:24:37.443 00:24:37.443 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3972744 00:24:37.443 Initializing NVMe Controllers 00:24:37.443 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:37.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:37.443 Initialization complete. Launching workers. 00:24:37.443 ======================================================== 00:24:37.443 Latency(us) 00:24:37.443 Device Information : IOPS MiB/s Average min max 00:24:37.443 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4179.00 16.32 238.83 167.60 16007.42 00:24:37.443 ======================================================== 00:24:37.443 Total : 4179.00 16.32 238.83 167.60 16007.42 00:24:37.443 00:24:37.443 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3972746 00:24:37.443 Initializing NVMe Controllers 00:24:37.443 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:37.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:37.443 Initialization complete. Launching workers. 00:24:37.443 ======================================================== 00:24:37.443 Latency(us) 00:24:37.444 Device Information : IOPS MiB/s Average min max 00:24:37.444 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4316.00 16.86 231.21 160.22 638.57 00:24:37.444 ======================================================== 00:24:37.444 Total : 4316.00 16.86 231.21 160.22 638.57 00:24:37.444 00:24:37.444 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:37.444 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:37.444 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:37.444 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:37.444 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:37.444 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:37.444 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:37.444 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:37.444 rmmod nvme_tcp 00:24:37.444 rmmod nvme_fabrics 00:24:37.444 rmmod nvme_keyring 00:24:37.444 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:37.444 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:37.444 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:37.444 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- 
# '[' -n 3972577 ']' 00:24:37.444 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3972577 00:24:37.444 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3972577 ']' 00:24:37.444 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3972577 00:24:37.444 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:37.444 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:37.444 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3972577 00:24:37.703 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:37.703 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:37.703 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3972577' 00:24:37.703 killing process with pid 3972577 00:24:37.703 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3972577 00:24:37.703 10:26:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3972577 00:24:38.640 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:38.640 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:38.640 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:38.640 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:38.640 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:38.640 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:38.640 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:38.640 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:38.640 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:38.640 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.640 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.640 10:26:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.173 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:41.174 00:24:41.174 real 0m11.582s 00:24:41.174 user 0m8.429s 00:24:41.174 sys 0m5.433s 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:41.174 ************************************ 00:24:41.174 END TEST nvmf_control_msg_list 00:24:41.174 ************************************ 
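In outline, the control_msg_list case above configures a TCP transport with a single control message buffer and then drives it from three concurrent single-queue-depth perf jobs. Reproducing it by hand against a running nvmf_tgt would look roughly like the sketch below; scripts/rpc.py stands in for the harness's rpc_cmd wrapper, the flags and addresses are copied from the trace above, and paths are relative to the SPDK repository:

  # transport with 768-byte in-capsule data and only one control message buffer
  scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # three concurrent single-queue-depth readers, as in the run above (core masks 0x2/0x4/0x8)
  for mask in 0x2 0x4 0x8; do
    build/bin/spdk_nvme_perf -c $mask -q 1 -o 4096 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  done
  wait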
00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:41.174 ************************************ 00:24:41.174 START TEST nvmf_wait_for_buf 00:24:41.174 ************************************ 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:41.174 * Looking for test storage... 00:24:41.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:41.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.174 --rc genhtml_branch_coverage=1 00:24:41.174 --rc genhtml_function_coverage=1 00:24:41.174 --rc genhtml_legend=1 00:24:41.174 --rc geninfo_all_blocks=1 00:24:41.174 --rc geninfo_unexecuted_blocks=1 00:24:41.174 00:24:41.174 ' 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:41.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.174 --rc genhtml_branch_coverage=1 00:24:41.174 --rc genhtml_function_coverage=1 00:24:41.174 --rc genhtml_legend=1 00:24:41.174 --rc geninfo_all_blocks=1 00:24:41.174 --rc geninfo_unexecuted_blocks=1 00:24:41.174 00:24:41.174 ' 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:41.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.174 --rc genhtml_branch_coverage=1 00:24:41.174 --rc genhtml_function_coverage=1 00:24:41.174 --rc genhtml_legend=1 00:24:41.174 --rc geninfo_all_blocks=1 00:24:41.174 --rc geninfo_unexecuted_blocks=1 00:24:41.174 00:24:41.174 ' 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:41.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.174 --rc genhtml_branch_coverage=1 00:24:41.174 --rc genhtml_function_coverage=1 00:24:41.174 --rc genhtml_legend=1 00:24:41.174 --rc geninfo_all_blocks=1 00:24:41.174 --rc geninfo_unexecuted_blocks=1 00:24:41.174 00:24:41.174 ' 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:41.174 10:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.174 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:41.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:41.175 10:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:46.447 
10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:46.447 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:46.447 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:46.447 Found net devices under 0000:af:00.0: cvl_0_0 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:46.447 Found net devices under 0000:af:00.1: cvl_0_1 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:46.447 10:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:46.447 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:46.448 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:46.448 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:46.448 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:46.448 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:46.448 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:46.448 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:46.707 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:46.707 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:46.707 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:46.707 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:46.965 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:46.965 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:46.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:46.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:24:46.966 00:24:46.966 --- 10.0.0.2 ping statistics --- 00:24:46.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.966 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:46.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:46.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:24:46.966 00:24:46.966 --- 10.0.0.1 ping statistics --- 00:24:46.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.966 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3976520 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3976520 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3976520 ']' 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:46.966 10:26:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:46.966 [2024-12-13 10:26:40.759069] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:24:46.966 [2024-12-13 10:26:40.759155] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.225 [2024-12-13 10:26:40.877165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.225 [2024-12-13 10:26:40.984193] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.225 [2024-12-13 10:26:40.984242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.225 [2024-12-13 10:26:40.984253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.225 [2024-12-13 10:26:40.984263] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.225 [2024-12-13 10:26:40.984271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:47.225 [2024-12-13 10:26:40.985786] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.793 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:47.793 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:47.793 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:47.793 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:47.793 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:47.793 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.793 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:47.793 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:47.793 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:47.793 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.793 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:47.793 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.793 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:47.793 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.793 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:47.793 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.793 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:47.793 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.793 10:26:41 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:48.052 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.052 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:48.052 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.052 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:48.052 Malloc0 00:24:48.052 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.052 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:48.052 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.052 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:48.052 [2024-12-13 10:26:41.929120] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.052 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.052 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:48.052 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.052 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:48.052 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.052 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:48.052 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.052 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:48.311 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.311 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:48.311 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.311 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:48.311 [2024-12-13 10:26:41.953340] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.311 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.311 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:48.311 [2024-12-13 10:26:42.074586] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:49.688 Initializing NVMe Controllers 00:24:49.688 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:49.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:49.688 Initialization complete. Launching workers. 00:24:49.688 ======================================================== 00:24:49.688 Latency(us) 00:24:49.688 Device Information : IOPS MiB/s Average min max 00:24:49.688 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 25.00 3.12 165498.87 7125.10 193528.05 00:24:49.688 ======================================================== 00:24:49.688 Total : 25.00 3.12 165498.87 7125.10 193528.05 00:24:49.688 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=374 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 374 -eq 0 ]] 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:49.947 rmmod nvme_tcp 00:24:49.947 rmmod nvme_fabrics 00:24:49.947 rmmod nvme_keyring 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3976520 ']' 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3976520 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3976520 ']' 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3976520 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3976520 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3976520' 00:24:49.947 killing process with pid 3976520 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3976520 00:24:49.947 10:26:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3976520 00:24:51.324 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:51.324 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:51.324 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:51.324 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:51.324 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:51.324 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:51.324 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:51.324 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:51.324 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:51.324 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.324 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.324 10:26:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.228 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:53.228 00:24:53.228 real 0m12.209s 00:24:53.228 user 0m5.883s 00:24:53.228 sys 0m4.878s 00:24:53.228 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:53.228 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:53.228 ************************************ 00:24:53.228 END TEST nvmf_wait_for_buf 00:24:53.228 ************************************ 00:24:53.228 10:26:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:53.228 10:26:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:53.228 10:26:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:53.228 10:26:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:53.228 10:26:46 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:53.228 ************************************ 00:24:53.228 START TEST nvmf_fuzz 00:24:53.228 ************************************ 00:24:53.228 10:26:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:53.228 * Looking for test storage... 00:24:53.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:53.228 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:53.228 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:24:53.228 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:53.228 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:53.228 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:53.228 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:53.228 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:53.228 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:53.228 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:53.228 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:53.229 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:53.229 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:53.229 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:53.229 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:53.229 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:53.229 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:53.229 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:53.229 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:53.229 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:53.229 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:53.229 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:53.229 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:53.229 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:53.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.488 --rc genhtml_branch_coverage=1 00:24:53.488 --rc genhtml_function_coverage=1 00:24:53.488 --rc genhtml_legend=1 00:24:53.488 --rc geninfo_all_blocks=1 00:24:53.488 --rc geninfo_unexecuted_blocks=1 00:24:53.488 00:24:53.488 ' 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:53.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.488 --rc genhtml_branch_coverage=1 00:24:53.488 --rc genhtml_function_coverage=1 00:24:53.488 --rc genhtml_legend=1 00:24:53.488 --rc geninfo_all_blocks=1 00:24:53.488 --rc geninfo_unexecuted_blocks=1 00:24:53.488 00:24:53.488 ' 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:53.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.488 --rc genhtml_branch_coverage=1 00:24:53.488 --rc genhtml_function_coverage=1 00:24:53.488 --rc genhtml_legend=1 00:24:53.488 --rc geninfo_all_blocks=1 00:24:53.488 --rc geninfo_unexecuted_blocks=1 00:24:53.488 00:24:53.488 ' 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:53.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.488 --rc genhtml_branch_coverage=1 00:24:53.488 --rc genhtml_function_coverage=1 00:24:53.488 --rc genhtml_legend=1 00:24:53.488 --rc geninfo_all_blocks=1 00:24:53.488 --rc geninfo_unexecuted_blocks=1 00:24:53.488 00:24:53.488 ' 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:53.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.488 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.489 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:53.489 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:53.489 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:53.489 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:58.760 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.760 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:58.760 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:58.760 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:58.760 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:58.760 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:58.760 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:58.760 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:58.760 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:58.761 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:58.761 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:58.761 Found net devices under 0000:af:00.0: cvl_0_0 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:58.761 Found net devices under 0000:af:00.1: cvl_0_1 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:58.761 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.020 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:59.020 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:59.020 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:59.020 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:59.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:59.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:24:59.020 00:24:59.020 --- 10.0.0.2 ping statistics --- 00:24:59.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.020 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:24:59.020 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:59.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:59.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:24:59.020 00:24:59.020 --- 10.0.0.1 ping statistics --- 00:24:59.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.020 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:24:59.020 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.021 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:59.021 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:59.021 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.021 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:59.021 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:59.021 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:59.021 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:59.021 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:59.021 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3980665 00:24:59.021 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:59.021 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:59.021 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3980665 00:24:59.021 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3980665 ']' 00:24:59.021 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.021 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:59.021 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
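As with the wait_for_buf test above, nvmf_tcp_init builds a point-to-point topology from the two ice/E810 ports discovered earlier (0000:af:00.0 and 0000:af:00.1, exposed as cvl_0_0 and cvl_0_1): cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target-side interface at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, and both directions are ping-verified before the target starts. A condensed sketch of the commands from the trace above (the iptables comment module and error handling are omitted):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                         # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # open NVMe/TCP port 4420 on the initiator-side interface
    ping -c 1 10.0.0.2                                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                  # target -> initiator

The fuzz target itself is then run inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1, as traced above), while the nvme_fuzz client below connects from the root namespace to 10.0.0.2:4420 over the physical link.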
00:24:59.021 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:59.021 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:59.958 Malloc0 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:59.958 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:32.040 Fuzzing completed. 
Shutting down the fuzz application 00:25:32.040 00:25:32.040 Dumping successful admin opcodes: 00:25:32.040 9, 10, 00:25:32.040 Dumping successful io opcodes: 00:25:32.040 0, 9, 00:25:32.040 NS: 0x2000008efec0 I/O qp, Total commands completed: 657502, total successful commands: 3836, random_seed: 2213047744 00:25:32.040 NS: 0x2000008efec0 admin qp, Total commands completed: 77472, total successful commands: 16, random_seed: 886198464 00:25:32.040 10:27:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:32.298 Fuzzing completed. Shutting down the fuzz application 00:25:32.298 00:25:32.298 Dumping successful admin opcodes: 00:25:32.298 00:25:32.298 Dumping successful io opcodes: 00:25:32.298 00:25:32.298 NS: 0x2000008efec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1261960278 00:25:32.298 NS: 0x2000008efec0 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 1262057406 00:25:32.298 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:32.298 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.298 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:32.298 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.298 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:32.298 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:32.298 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:32.298 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:32.298 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:32.298 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:32.298 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:32.298 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:32.298 rmmod nvme_tcp 00:25:32.298 rmmod nvme_fabrics 00:25:32.298 rmmod nvme_keyring 00:25:32.298 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:32.298 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:32.298 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:32.298 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 3980665 ']' 00:25:32.298 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 3980665 00:25:32.298 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3980665 ']' 00:25:32.298 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 3980665 00:25:32.298 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:32.557 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:32.557 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3980665 00:25:32.557 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:32.557 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:32.557 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3980665' 00:25:32.557 killing process with pid 3980665 00:25:32.557 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 3980665 00:25:32.557 10:27:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 3980665 00:25:33.933 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:33.933 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:33.933 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:33.933 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:33.933 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:33.933 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:33.933 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:33.933 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:33.933 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:33.933 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.933 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:33.933 10:27:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.835 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:35.835 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:35.835 00:25:35.835 real 0m42.694s 00:25:35.835 user 0m56.785s 00:25:35.835 sys 0m16.150s 00:25:35.835 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:35.835 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:35.835 ************************************ 00:25:35.835 END TEST nvmf_fuzz 00:25:35.835 ************************************ 00:25:35.835 10:27:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:35.835 10:27:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:35.835 10:27:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:35.835 10:27:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:35.835 ************************************ 00:25:35.835 START 
TEST nvmf_multiconnection 00:25:35.835 ************************************ 00:25:35.835 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:36.094 * Looking for test storage... 00:25:36.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:36.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.094 --rc genhtml_branch_coverage=1 00:25:36.094 --rc genhtml_function_coverage=1 00:25:36.094 --rc genhtml_legend=1 00:25:36.094 --rc geninfo_all_blocks=1 00:25:36.094 --rc geninfo_unexecuted_blocks=1 00:25:36.094 00:25:36.094 ' 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:36.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.094 --rc genhtml_branch_coverage=1 00:25:36.094 --rc genhtml_function_coverage=1 00:25:36.094 --rc genhtml_legend=1 00:25:36.094 --rc geninfo_all_blocks=1 00:25:36.094 --rc geninfo_unexecuted_blocks=1 00:25:36.094 00:25:36.094 ' 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:36.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.094 --rc genhtml_branch_coverage=1 00:25:36.094 --rc genhtml_function_coverage=1 00:25:36.094 --rc genhtml_legend=1 00:25:36.094 --rc geninfo_all_blocks=1 00:25:36.094 --rc geninfo_unexecuted_blocks=1 00:25:36.094 00:25:36.094 ' 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:36.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.094 --rc genhtml_branch_coverage=1 00:25:36.094 --rc genhtml_function_coverage=1 00:25:36.094 --rc genhtml_legend=1 00:25:36.094 --rc geninfo_all_blocks=1 00:25:36.094 --rc geninfo_unexecuted_blocks=1 00:25:36.094 00:25:36.094 ' 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.094 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:36.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:36.095 10:27:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:41.366 10:27:35 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:41.366 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:41.366 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.366 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:41.366 Found net devices under 0000:af:00.0: cvl_0_0 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:41.367 Found net devices under 0000:af:00.1: cvl_0_1 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:41.367 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:41.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:41.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:25:41.626 00:25:41.626 --- 10.0.0.2 ping statistics --- 00:25:41.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.626 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:41.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:41.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:25:41.626 00:25:41.626 --- 10.0.0.1 ping statistics --- 00:25:41.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.626 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=3990002 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 3990002 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:41.626 10:27:35 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 3990002 ']' 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:41.626 10:27:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:41.626 [2024-12-13 10:27:35.433816] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:25:41.626 [2024-12-13 10:27:35.433903] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.884 [2024-12-13 10:27:35.556331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:41.884 [2024-12-13 10:27:35.676875] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:41.885 [2024-12-13 10:27:35.676925] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:41.885 [2024-12-13 10:27:35.676936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:41.885 [2024-12-13 10:27:35.676946] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:41.885 [2024-12-13 10:27:35.676954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
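For reference, the namespace plumbing traced in the nvmf_tcp_init block above boils down to the sketch below. The interface names, namespace name, and 10.0.0.0/24 addresses are the ones this run reported; the real helper in test/nvmf/common.sh also branches for RDMA and virtual-interface setups, which are omitted here.

    # Condensed sketch of the data-path setup nvmftestinit performed above:
    # the first e810 port is handed to the SPDK target inside its own network
    # namespace, the second stays in the default namespace as the initiator.
    NVMF_TARGET_INTERFACE=cvl_0_0
    NVMF_INITIATOR_INTERFACE=cvl_0_1
    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

    ip -4 addr flush "$NVMF_TARGET_INTERFACE"
    ip -4 addr flush "$NVMF_INITIATOR_INTERFACE"
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set "$NVMF_TARGET_INTERFACE" netns "$NVMF_TARGET_NAMESPACE"
    ip addr add 10.0.0.1/24 dev "$NVMF_INITIATOR_INTERFACE"
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev "$NVMF_TARGET_INTERFACE"
    ip link set "$NVMF_INITIATOR_INTERFACE" up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set "$NVMF_TARGET_INTERFACE" up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

    # Open the default NVMe/TCP port on the initiator side, then verify
    # reachability in both directions (the two pings logged above).
    iptables -I INPUT 1 -i "$NVMF_INITIATOR_INTERFACE" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1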
00:25:41.885 [2024-12-13 10:27:35.679458] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.885 [2024-12-13 10:27:35.679527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:25:41.885 [2024-12-13 10:27:35.679543] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.885 [2024-12-13 10:27:35.679552] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:25:42.451 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:42.451 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:42.451 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:42.451 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:42.451 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.451 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.451 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:42.451 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.451 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.451 [2024-12-13 10:27:36.284693] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.451 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.451 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:42.452 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:42.452 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:42.452 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.452 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.710 Malloc1 00:25:42.710 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.710 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:42.710 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.710 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.710 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.710 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:42.710 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.710 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
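nvmfappstart then launches nvmf_tgt inside that namespace and waits for its JSON-RPC socket before creating the TCP transport. A minimal stand-in using the same workspace paths and flags as this run; the polling loop is only an illustrative replacement for the waitforlisten helper, not its actual implementation.

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TARGET_NS=cvl_0_0_ns_spdk
    RPC_SOCK=/var/tmp/spdk.sock

    # Launch nvmf_tgt on cores 0-3 inside the target namespace (flags as logged above).
    ip netns exec "$TARGET_NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Simplified wait: poll until the RPC socket answers, bail out if the target died.
    until "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1
        sleep 0.5
    done

    # Create the TCP transport with the same options used by the test.
    "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" nvmf_create_transport -t tcp -o -u 8192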
00:25:42.710 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.710 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:42.710 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.711 [2024-12-13 10:27:36.409700] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.711 Malloc2 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.711 10:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.711 Malloc3 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.711 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.970 Malloc4 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.970 Malloc5 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.970 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.229 Malloc6 00:25:43.229 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:43.229 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:43.229 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.229 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.229 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.229 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:43.229 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.229 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.229 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.229 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:43.229 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.229 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.229 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.229 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.229 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:43.229 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.229 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.229 Malloc7 00:25:43.229 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.229 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:43.229 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.229 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.229 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.229 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:43.229 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.229 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.229 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.229 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
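multiconnection.sh provisions NVMF_SUBSYS=11 identical subsystems; the rpc_cmd sequence repeated above for cnode1 through cnode7 (and for cnode8 through cnode11 further on) amounts to the loop below, written here against rpc.py directly rather than the rpc_cmd wrapper.

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC_SOCK=/var/tmp/spdk.sock
    NVMF_SUBSYS=11                      # from target/multiconnection.sh@14 above
    rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" "$@"; }

    for i in $(seq 1 "$NVMF_SUBSYS"); do
        # 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE)
        rpc bdev_malloc_create 64 512 -b "Malloc$i"
        # allow-any-host subsystem with serial number SPDK$i
        rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        # every subsystem listens on the same target address and port
        rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done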
00:25:43.229 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.229 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.229 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.229 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.229 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:43.229 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.229 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.488 Malloc8 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.488 Malloc9 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:43.488 10:27:37 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.488 Malloc10 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.488 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.489 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.489 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:43.489 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.489 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:43.489 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.489 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.489 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:43.489 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.489 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.747 Malloc11 00:25:43.747 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.747 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:43.747 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.747 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.747 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.747 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:43.747 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.747 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.747 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.747 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:43.747 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.747 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:43.747 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.747 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:43.747 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.747 10:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:45.122 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:45.122 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:45.122 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:45.122 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:45.122 10:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:47.022 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:47.022 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:47.022 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:47.022 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:47.022 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:47.022 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:47.022 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.022 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:48.397 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:48.397 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:48.397 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:48.397 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:48.397 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:50.297 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:50.297 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:50.297 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:50.297 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:50.297 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:50.297 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:50.297 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.297 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:51.672 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:51.672 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:51.672 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:25:51.672 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:51.672 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:53.573 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:53.573 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:53.573 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:53.573 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:53.573 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:53.573 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:53.573 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.573 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:54.948 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:54.948 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:54.948 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:54.948 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:54.948 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:56.848 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:56.848 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:56.848 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:56.848 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:56.848 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:56.848 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:56.848 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:56.848 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:58.223 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:58.223 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # 
local i=0 00:25:58.223 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:58.223 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:58.223 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:00.124 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:00.124 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:00.124 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:26:00.124 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:00.124 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:00.124 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:00.124 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:00.124 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:01.498 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:01.498 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:01.498 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:01.498 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:01.498 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:03.398 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:03.398 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:03.398 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:26:03.398 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:03.398 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:03.398 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:03.398 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.398 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:04.773 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:04.773 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:04.773 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:04.773 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:04.773 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:07.326 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:07.326 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:07.326 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:26:07.326 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:07.326 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:07.326 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:07.326 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.326 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:08.317 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:08.317 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:08.317 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:08.317 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:08.317 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:10.215 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:10.215 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:10.215 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:26:10.215 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:10.215 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:10.215 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:10.215 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:10.215 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:11.587 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:11.587 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:11.587 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:11.587 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:11.587 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:14.116 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:14.116 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:14.116 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:26:14.116 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:14.116 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:14.116 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:14.116 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:14.116 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:15.050 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:15.050 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:15.050 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:15.050 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:15.050 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:17.580 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:17.580 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:17.580 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:26:17.580 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:17.580 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:17.580 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:17.580 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.580 10:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:18.514 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:18.514 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:18.514 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:18.514 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:18.514 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:21.043 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:21.043 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:21.043 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:21.043 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:21.043 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:21.044 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:21.044 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:21.044 [global] 00:26:21.044 thread=1 00:26:21.044 invalidate=1 00:26:21.044 rw=read 00:26:21.044 time_based=1 00:26:21.044 runtime=10 00:26:21.044 ioengine=libaio 00:26:21.044 direct=1 00:26:21.044 bs=262144 00:26:21.044 iodepth=64 00:26:21.044 norandommap=1 00:26:21.044 numjobs=1 00:26:21.044 00:26:21.044 [job0] 00:26:21.044 filename=/dev/nvme0n1 00:26:21.044 [job1] 00:26:21.044 filename=/dev/nvme10n1 00:26:21.044 [job2] 00:26:21.044 filename=/dev/nvme1n1 00:26:21.044 [job3] 00:26:21.044 filename=/dev/nvme2n1 00:26:21.044 [job4] 00:26:21.044 filename=/dev/nvme3n1 00:26:21.044 [job5] 00:26:21.044 filename=/dev/nvme4n1 00:26:21.044 [job6] 00:26:21.044 filename=/dev/nvme5n1 00:26:21.044 [job7] 00:26:21.044 filename=/dev/nvme6n1 00:26:21.044 [job8] 00:26:21.044 filename=/dev/nvme7n1 00:26:21.044 [job9] 00:26:21.044 filename=/dev/nvme8n1 00:26:21.044 [job10] 00:26:21.044 filename=/dev/nvme9n1 00:26:21.044 Could not set queue depth (nvme0n1) 00:26:21.044 Could not set queue depth (nvme10n1) 00:26:21.044 Could not set queue depth (nvme1n1) 00:26:21.044 Could not set queue depth (nvme2n1) 00:26:21.044 Could not set queue depth (nvme3n1) 00:26:21.044 Could not set queue depth (nvme4n1) 00:26:21.044 Could not set queue depth (nvme5n1) 00:26:21.044 Could not set queue depth (nvme6n1) 00:26:21.044 Could not set queue depth (nvme7n1) 00:26:21.044 Could not set queue depth (nvme8n1) 00:26:21.044 Could not set queue depth (nvme9n1) 00:26:21.044 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:21.044 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:21.044 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:21.044 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:21.044 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:21.044 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:21.044 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:21.044 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:21.044 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:21.044 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:21.044 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:21.044 fio-3.35 00:26:21.044 Starting 11 threads 00:26:33.246 00:26:33.246 job0: (groupid=0, jobs=1): err= 0: pid=3996743: Fri Dec 13 10:28:25 2024 00:26:33.246 read: IOPS=340, BW=85.1MiB/s (89.2MB/s)(869MiB/10208msec) 00:26:33.246 slat (usec): min=21, max=567991, avg=1667.55, stdev=16520.40 00:26:33.246 clat (usec): min=1314, max=1337.3k, avg=186175.34, stdev=285922.95 00:26:33.246 lat (usec): min=1342, max=1356.4k, avg=187842.89, stdev=288194.00 00:26:33.246 clat percentiles (msec): 00:26:33.246 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 13], 20.00th=[ 16], 00:26:33.246 | 30.00th=[ 23], 40.00th=[ 35], 50.00th=[ 43], 60.00th=[ 74], 00:26:33.246 | 70.00th=[ 108], 80.00th=[ 317], 90.00th=[ 659], 95.00th=[ 902], 00:26:33.246 | 99.00th=[ 1167], 99.50th=[ 1183], 99.90th=[ 1200], 99.95th=[ 1200], 00:26:33.246 | 99.99th=[ 1334] 00:26:33.246 bw ( KiB/s): min=10752, max=322560, per=10.43%, avg=91890.53, stdev=100629.29, samples=19 00:26:33.246 iops : min= 42, max= 1260, avg=358.95, stdev=393.08, samples=19 00:26:33.246 lat (msec) : 2=0.43%, 4=4.66%, 10=3.94%, 20=16.47%, 50=28.15% 00:26:33.246 lat (msec) : 100=15.80%, 250=8.09%, 500=6.88%, 750=8.46%, 1000=3.66% 00:26:33.246 lat (msec) : 2000=3.45% 00:26:33.246 cpu : usr=0.13%, sys=1.23%, ctx=1618, majf=0, minf=4097 00:26:33.246 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:33.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.246 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.246 issued rwts: total=3474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.246 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.246 job1: (groupid=0, jobs=1): err= 0: pid=3996744: Fri Dec 13 10:28:25 2024 00:26:33.246 read: IOPS=387, BW=96.9MiB/s (102MB/s)(986MiB/10172msec) 00:26:33.246 slat (usec): min=14, max=613380, avg=1617.79, stdev=17590.50 00:26:33.246 clat (usec): min=1045, max=1464.1k, avg=163319.71, stdev=223614.44 00:26:33.246 lat (usec): min=1074, max=1464.1k, avg=164937.51, stdev=226525.99 00:26:33.246 clat percentiles (msec): 00:26:33.246 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 13], 20.00th=[ 27], 00:26:33.246 | 30.00th=[ 36], 40.00th=[ 63], 50.00th=[ 80], 60.00th=[ 86], 00:26:33.246 | 70.00th=[ 99], 80.00th=[ 275], 90.00th=[ 575], 95.00th=[ 701], 00:26:33.246 | 99.00th=[ 944], 99.50th=[ 986], 
99.90th=[ 1083], 99.95th=[ 1469], 00:26:33.246 | 99.99th=[ 1469] 00:26:33.246 bw ( KiB/s): min=11264, max=304128, per=11.86%, avg=104514.42, stdev=94757.50, samples=19 00:26:33.246 iops : min= 44, max= 1188, avg=408.21, stdev=370.14, samples=19 00:26:33.246 lat (msec) : 2=0.53%, 4=0.61%, 10=4.13%, 20=10.98%, 50=20.22% 00:26:33.246 lat (msec) : 100=34.47%, 250=7.86%, 500=9.92%, 750=8.17%, 1000=2.66% 00:26:33.246 lat (msec) : 2000=0.43% 00:26:33.246 cpu : usr=0.19%, sys=1.37%, ctx=1172, majf=0, minf=4097 00:26:33.246 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:33.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.246 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.246 issued rwts: total=3942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.246 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.246 job2: (groupid=0, jobs=1): err= 0: pid=3996745: Fri Dec 13 10:28:25 2024 00:26:33.246 read: IOPS=211, BW=52.8MiB/s (55.4MB/s)(537MiB/10166msec) 00:26:33.246 slat (usec): min=17, max=424097, avg=3668.70, stdev=19786.55 00:26:33.246 clat (msec): min=4, max=1260, avg=298.90, stdev=264.16 00:26:33.246 lat (msec): min=4, max=1260, avg=302.57, stdev=266.29 00:26:33.246 clat percentiles (msec): 00:26:33.246 | 1.00th=[ 64], 5.00th=[ 87], 10.00th=[ 96], 20.00th=[ 107], 00:26:33.246 | 30.00th=[ 115], 40.00th=[ 123], 50.00th=[ 153], 60.00th=[ 239], 00:26:33.246 | 70.00th=[ 334], 80.00th=[ 617], 90.00th=[ 735], 95.00th=[ 852], 00:26:33.246 | 99.00th=[ 1020], 99.50th=[ 1062], 99.90th=[ 1200], 99.95th=[ 1200], 00:26:33.246 | 99.99th=[ 1267] 00:26:33.246 bw ( KiB/s): min= 9728, max=145920, per=6.05%, avg=53353.70, stdev=45012.67, samples=20 00:26:33.247 iops : min= 38, max= 570, avg=208.40, stdev=175.84, samples=20 00:26:33.247 lat (msec) : 10=0.05%, 50=0.37%, 100=12.01%, 250=49.02%, 500=16.29% 00:26:33.247 lat (msec) : 750=13.04%, 1000=7.68%, 2000=1.54% 00:26:33.247 cpu : usr=0.07%, sys=0.97%, ctx=359, majf=0, minf=4097 00:26:33.247 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:26:33.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.247 issued rwts: total=2148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.247 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.247 job3: (groupid=0, jobs=1): err= 0: pid=3996746: Fri Dec 13 10:28:25 2024 00:26:33.247 read: IOPS=262, BW=65.6MiB/s (68.8MB/s)(662MiB/10082msec) 00:26:33.247 slat (usec): min=14, max=412305, avg=3780.28, stdev=18170.15 00:26:33.247 clat (msec): min=58, max=1120, avg=239.69, stdev=208.73 00:26:33.247 lat (msec): min=58, max=1120, avg=243.47, stdev=211.93 00:26:33.247 clat percentiles (msec): 00:26:33.247 | 1.00th=[ 68], 5.00th=[ 82], 10.00th=[ 89], 20.00th=[ 103], 00:26:33.247 | 30.00th=[ 120], 40.00th=[ 136], 50.00th=[ 163], 60.00th=[ 190], 00:26:33.247 | 70.00th=[ 226], 80.00th=[ 296], 90.00th=[ 625], 95.00th=[ 760], 00:26:33.247 | 99.00th=[ 919], 99.50th=[ 1045], 99.90th=[ 1070], 99.95th=[ 1116], 00:26:33.247 | 99.99th=[ 1116] 00:26:33.247 bw ( KiB/s): min= 2560, max=151552, per=7.50%, avg=66124.80, stdev=49507.01, samples=20 00:26:33.247 iops : min= 10, max= 592, avg=258.30, stdev=193.39, samples=20 00:26:33.247 lat (msec) : 100=18.93%, 250=54.93%, 500=13.60%, 750=7.29%, 1000=4.68% 00:26:33.247 lat (msec) : 2000=0.57% 00:26:33.247 cpu : usr=0.08%, sys=1.18%, ctx=352, 
majf=0, minf=4097 00:26:33.247 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:33.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.247 issued rwts: total=2647,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.247 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.247 job4: (groupid=0, jobs=1): err= 0: pid=3996747: Fri Dec 13 10:28:25 2024 00:26:33.247 read: IOPS=423, BW=106MiB/s (111MB/s)(1065MiB/10066msec) 00:26:33.247 slat (usec): min=15, max=373231, avg=1333.57, stdev=10782.55 00:26:33.247 clat (usec): min=1898, max=1199.6k, avg=149703.88, stdev=195640.80 00:26:33.247 lat (msec): min=2, max=1199, avg=151.04, stdev=197.28 00:26:33.247 clat percentiles (msec): 00:26:33.247 | 1.00th=[ 7], 5.00th=[ 32], 10.00th=[ 41], 20.00th=[ 43], 00:26:33.247 | 30.00th=[ 44], 40.00th=[ 48], 50.00th=[ 64], 60.00th=[ 102], 00:26:33.247 | 70.00th=[ 129], 80.00th=[ 211], 90.00th=[ 359], 95.00th=[ 550], 00:26:33.247 | 99.00th=[ 986], 99.50th=[ 1045], 99.90th=[ 1150], 99.95th=[ 1200], 00:26:33.247 | 99.99th=[ 1200] 00:26:33.247 bw ( KiB/s): min=10240, max=384000, per=12.19%, avg=107448.20, stdev=114303.29, samples=20 00:26:33.247 iops : min= 40, max= 1500, avg=419.70, stdev=446.51, samples=20 00:26:33.247 lat (msec) : 2=0.02%, 4=0.05%, 10=1.27%, 20=1.53%, 50=38.10% 00:26:33.247 lat (msec) : 100=18.52%, 250=22.46%, 500=12.37%, 750=1.38%, 1000=3.57% 00:26:33.247 lat (msec) : 2000=0.73% 00:26:33.247 cpu : usr=0.20%, sys=1.67%, ctx=863, majf=0, minf=4097 00:26:33.247 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:33.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.247 issued rwts: total=4260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.247 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.247 job5: (groupid=0, jobs=1): err= 0: pid=3996748: Fri Dec 13 10:28:25 2024 00:26:33.247 read: IOPS=268, BW=67.1MiB/s (70.4MB/s)(683MiB/10180msec) 00:26:33.247 slat (usec): min=9, max=533893, avg=3196.89, stdev=17397.91 00:26:33.247 clat (msec): min=35, max=1207, avg=235.07, stdev=224.40 00:26:33.247 lat (msec): min=35, max=1272, avg=238.27, stdev=227.32 00:26:33.247 clat percentiles (msec): 00:26:33.247 | 1.00th=[ 43], 5.00th=[ 64], 10.00th=[ 77], 20.00th=[ 95], 00:26:33.247 | 30.00th=[ 106], 40.00th=[ 121], 50.00th=[ 142], 60.00th=[ 167], 00:26:33.247 | 70.00th=[ 207], 80.00th=[ 359], 90.00th=[ 625], 95.00th=[ 793], 00:26:33.247 | 99.00th=[ 1020], 99.50th=[ 1083], 99.90th=[ 1200], 99.95th=[ 1200], 00:26:33.247 | 99.99th=[ 1200] 00:26:33.247 bw ( KiB/s): min=11264, max=174592, per=7.75%, avg=68300.80, stdev=53620.44, samples=20 00:26:33.247 iops : min= 44, max= 682, avg=266.80, stdev=209.45, samples=20 00:26:33.247 lat (msec) : 50=2.64%, 100=22.25%, 250=48.94%, 500=14.71%, 750=4.58% 00:26:33.247 lat (msec) : 1000=5.53%, 2000=1.35% 00:26:33.247 cpu : usr=0.08%, sys=1.17%, ctx=416, majf=0, minf=4097 00:26:33.247 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:33.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.247 issued rwts: total=2732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.247 latency : target=0, window=0, percentile=100.00%, depth=64 
00:26:33.247 job6: (groupid=0, jobs=1): err= 0: pid=3996749: Fri Dec 13 10:28:25 2024 00:26:33.247 read: IOPS=240, BW=60.0MiB/s (62.9MB/s)(606MiB/10089msec) 00:26:33.247 slat (usec): min=13, max=514927, avg=2234.54, stdev=18404.99 00:26:33.247 clat (usec): min=985, max=1184.1k, avg=264123.75, stdev=252386.38 00:26:33.247 lat (usec): min=1016, max=1231.9k, avg=266358.29, stdev=255681.64 00:26:33.247 clat percentiles (msec): 00:26:33.247 | 1.00th=[ 10], 5.00th=[ 28], 10.00th=[ 51], 20.00th=[ 73], 00:26:33.247 | 30.00th=[ 91], 40.00th=[ 110], 50.00th=[ 176], 60.00th=[ 232], 00:26:33.247 | 70.00th=[ 284], 80.00th=[ 443], 90.00th=[ 718], 95.00th=[ 810], 00:26:33.247 | 99.00th=[ 1003], 99.50th=[ 1036], 99.90th=[ 1183], 99.95th=[ 1183], 00:26:33.247 | 99.99th=[ 1183] 00:26:33.247 bw ( KiB/s): min=10752, max=187392, per=6.85%, avg=60368.10, stdev=45989.81, samples=20 00:26:33.247 iops : min= 42, max= 732, avg=235.80, stdev=179.66, samples=20 00:26:33.247 lat (usec) : 1000=0.04% 00:26:33.247 lat (msec) : 2=0.17%, 4=0.21%, 10=0.91%, 20=1.57%, 50=7.06% 00:26:33.247 lat (msec) : 100=24.57%, 250=28.32%, 500=19.28%, 750=10.65%, 1000=6.07% 00:26:33.247 lat (msec) : 2000=1.16% 00:26:33.247 cpu : usr=0.06%, sys=0.91%, ctx=633, majf=0, minf=4097 00:26:33.247 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:33.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.247 issued rwts: total=2422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.247 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.247 job7: (groupid=0, jobs=1): err= 0: pid=3996750: Fri Dec 13 10:28:25 2024 00:26:33.247 read: IOPS=230, BW=57.7MiB/s (60.5MB/s)(583MiB/10091msec) 00:26:33.247 slat (usec): min=15, max=342964, avg=3335.33, stdev=16478.39 00:26:33.247 clat (msec): min=8, max=1158, avg=273.56, stdev=231.64 00:26:33.247 lat (msec): min=8, max=1158, avg=276.89, stdev=233.29 00:26:33.247 clat percentiles (msec): 00:26:33.247 | 1.00th=[ 24], 5.00th=[ 92], 10.00th=[ 108], 20.00th=[ 127], 00:26:33.247 | 30.00th=[ 140], 40.00th=[ 159], 50.00th=[ 184], 60.00th=[ 220], 00:26:33.247 | 70.00th=[ 271], 80.00th=[ 355], 90.00th=[ 651], 95.00th=[ 885], 00:26:33.247 | 99.00th=[ 1070], 99.50th=[ 1133], 99.90th=[ 1167], 99.95th=[ 1167], 00:26:33.247 | 99.99th=[ 1167] 00:26:33.247 bw ( KiB/s): min= 9216, max=118272, per=6.58%, avg=58009.60, stdev=36801.48, samples=20 00:26:33.247 iops : min= 36, max= 462, avg=226.60, stdev=143.76, samples=20 00:26:33.247 lat (msec) : 10=0.47%, 20=0.39%, 50=0.90%, 100=5.11%, 250=58.28% 00:26:33.247 lat (msec) : 500=21.03%, 750=5.88%, 1000=6.22%, 2000=1.72% 00:26:33.247 cpu : usr=0.09%, sys=1.06%, ctx=348, majf=0, minf=4097 00:26:33.247 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:26:33.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.247 issued rwts: total=2330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.247 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.247 job8: (groupid=0, jobs=1): err= 0: pid=3996751: Fri Dec 13 10:28:25 2024 00:26:33.247 read: IOPS=401, BW=100MiB/s (105MB/s)(1021MiB/10166msec) 00:26:33.247 slat (usec): min=19, max=390125, avg=2343.83, stdev=12262.91 00:26:33.247 clat (msec): min=20, max=1136, avg=156.84, stdev=190.96 00:26:33.247 lat (msec): min=22, max=1136, avg=159.19, 
stdev=193.68 00:26:33.247 clat percentiles (msec): 00:26:33.247 | 1.00th=[ 27], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 33], 00:26:33.247 | 30.00th=[ 43], 40.00th=[ 71], 50.00th=[ 86], 60.00th=[ 113], 00:26:33.247 | 70.00th=[ 140], 80.00th=[ 218], 90.00th=[ 384], 95.00th=[ 634], 00:26:33.247 | 99.00th=[ 936], 99.50th=[ 1036], 99.90th=[ 1133], 99.95th=[ 1133], 00:26:33.247 | 99.99th=[ 1133] 00:26:33.247 bw ( KiB/s): min= 1536, max=505344, per=11.67%, avg=102886.40, stdev=116882.64, samples=20 00:26:33.247 iops : min= 6, max= 1974, avg=401.90, stdev=456.57, samples=20 00:26:33.247 lat (msec) : 50=30.98%, 100=22.90%, 250=29.56%, 500=9.31%, 750=5.02% 00:26:33.247 lat (msec) : 1000=1.42%, 2000=0.81% 00:26:33.247 cpu : usr=0.17%, sys=1.67%, ctx=599, majf=0, minf=4097 00:26:33.247 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:33.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.247 issued rwts: total=4083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.247 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.247 job9: (groupid=0, jobs=1): err= 0: pid=3996752: Fri Dec 13 10:28:25 2024 00:26:33.247 read: IOPS=330, BW=82.7MiB/s (86.7MB/s)(841MiB/10174msec) 00:26:33.247 slat (usec): min=15, max=611338, avg=2076.18, stdev=19408.34 00:26:33.247 clat (usec): min=1615, max=1288.2k, avg=191326.67, stdev=224194.48 00:26:33.247 lat (usec): min=1657, max=1288.2k, avg=193402.85, stdev=226542.83 00:26:33.247 clat percentiles (msec): 00:26:33.247 | 1.00th=[ 10], 5.00th=[ 26], 10.00th=[ 35], 20.00th=[ 53], 00:26:33.247 | 30.00th=[ 87], 40.00th=[ 105], 50.00th=[ 122], 60.00th=[ 130], 00:26:33.247 | 70.00th=[ 140], 80.00th=[ 251], 90.00th=[ 567], 95.00th=[ 701], 00:26:33.247 | 99.00th=[ 1167], 99.50th=[ 1250], 99.90th=[ 1267], 99.95th=[ 1284], 00:26:33.247 | 99.99th=[ 1284] 00:26:33.247 bw ( KiB/s): min= 4096, max=240640, per=10.09%, avg=88929.58, stdev=63210.61, samples=19 00:26:33.247 iops : min= 16, max= 940, avg=347.37, stdev=246.93, samples=19 00:26:33.247 lat (msec) : 2=0.09%, 4=0.09%, 10=1.10%, 20=2.26%, 50=15.43% 00:26:33.247 lat (msec) : 100=19.14%, 250=41.71%, 500=8.71%, 750=7.61%, 1000=2.56% 00:26:33.247 lat (msec) : 2000=1.31% 00:26:33.247 cpu : usr=0.14%, sys=1.27%, ctx=1261, majf=0, minf=4097 00:26:33.247 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:33.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.248 issued rwts: total=3364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.248 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.248 job10: (groupid=0, jobs=1): err= 0: pid=3996753: Fri Dec 13 10:28:25 2024 00:26:33.248 read: IOPS=370, BW=92.7MiB/s (97.2MB/s)(935MiB/10082msec) 00:26:33.248 slat (usec): min=13, max=808178, avg=1817.10, stdev=16402.02 00:26:33.248 clat (msec): min=2, max=1304, avg=170.65, stdev=203.82 00:26:33.248 lat (msec): min=2, max=1429, avg=172.47, stdev=205.23 00:26:33.248 clat percentiles (msec): 00:26:33.248 | 1.00th=[ 22], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 65], 00:26:33.248 | 30.00th=[ 94], 40.00th=[ 112], 50.00th=[ 123], 60.00th=[ 130], 00:26:33.248 | 70.00th=[ 140], 80.00th=[ 165], 90.00th=[ 338], 95.00th=[ 558], 00:26:33.248 | 99.00th=[ 1301], 99.50th=[ 1301], 99.90th=[ 1301], 99.95th=[ 1301], 00:26:33.248 | 99.99th=[ 1301] 00:26:33.248 bw 
( KiB/s): min= 4096, max=314368, per=11.23%, avg=99004.63, stdev=73162.32, samples=19 00:26:33.248 iops : min= 16, max= 1228, avg=386.74, stdev=285.79, samples=19 00:26:33.248 lat (msec) : 4=0.05%, 10=0.19%, 20=0.56%, 50=12.97%, 100=18.89% 00:26:33.248 lat (msec) : 250=52.97%, 500=7.76%, 750=3.85%, 1000=0.94%, 2000=1.82% 00:26:33.248 cpu : usr=0.13%, sys=1.24%, ctx=821, majf=0, minf=3722 00:26:33.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:33.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.248 issued rwts: total=3738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.248 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.248 00:26:33.248 Run status group 0 (all jobs): 00:26:33.248 READ: bw=861MiB/s (902MB/s), 52.8MiB/s-106MiB/s (55.4MB/s-111MB/s), io=8785MiB (9212MB), run=10066-10208msec 00:26:33.248 00:26:33.248 Disk stats (read/write): 00:26:33.248 nvme0n1: ios=6946/0, merge=0/0, ticks=1268925/0, in_queue=1268925, util=97.37% 00:26:33.248 nvme10n1: ios=7739/0, merge=0/0, ticks=1203365/0, in_queue=1203365, util=97.49% 00:26:33.248 nvme1n1: ios=4161/0, merge=0/0, ticks=1185530/0, in_queue=1185530, util=97.77% 00:26:33.248 nvme2n1: ios=5128/0, merge=0/0, ticks=1230574/0, in_queue=1230574, util=97.89% 00:26:33.248 nvme3n1: ios=8225/0, merge=0/0, ticks=1250348/0, in_queue=1250348, util=97.97% 00:26:33.248 nvme4n1: ios=5337/0, merge=0/0, ticks=1231088/0, in_queue=1231088, util=98.31% 00:26:33.248 nvme5n1: ios=4678/0, merge=0/0, ticks=1228526/0, in_queue=1228526, util=98.47% 00:26:33.248 nvme6n1: ios=4487/0, merge=0/0, ticks=1233735/0, in_queue=1233735, util=98.58% 00:26:33.248 nvme7n1: ios=8025/0, merge=0/0, ticks=1180853/0, in_queue=1180853, util=98.95% 00:26:33.248 nvme8n1: ios=6607/0, merge=0/0, ticks=1205102/0, in_queue=1205102, util=99.13% 00:26:33.248 nvme9n1: ios=7310/0, merge=0/0, ticks=1238600/0, in_queue=1238600, util=99.26% 00:26:33.248 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:33.248 [global] 00:26:33.248 thread=1 00:26:33.248 invalidate=1 00:26:33.248 rw=randwrite 00:26:33.248 time_based=1 00:26:33.248 runtime=10 00:26:33.248 ioengine=libaio 00:26:33.248 direct=1 00:26:33.248 bs=262144 00:26:33.248 iodepth=64 00:26:33.248 norandommap=1 00:26:33.248 numjobs=1 00:26:33.248 00:26:33.248 [job0] 00:26:33.248 filename=/dev/nvme0n1 00:26:33.248 [job1] 00:26:33.248 filename=/dev/nvme10n1 00:26:33.248 [job2] 00:26:33.248 filename=/dev/nvme1n1 00:26:33.248 [job3] 00:26:33.248 filename=/dev/nvme2n1 00:26:33.248 [job4] 00:26:33.248 filename=/dev/nvme3n1 00:26:33.248 [job5] 00:26:33.248 filename=/dev/nvme4n1 00:26:33.248 [job6] 00:26:33.248 filename=/dev/nvme5n1 00:26:33.248 [job7] 00:26:33.248 filename=/dev/nvme6n1 00:26:33.248 [job8] 00:26:33.248 filename=/dev/nvme7n1 00:26:33.248 [job9] 00:26:33.248 filename=/dev/nvme8n1 00:26:33.248 [job10] 00:26:33.248 filename=/dev/nvme9n1 00:26:33.248 Could not set queue depth (nvme0n1) 00:26:33.248 Could not set queue depth (nvme10n1) 00:26:33.248 Could not set queue depth (nvme1n1) 00:26:33.248 Could not set queue depth (nvme2n1) 00:26:33.248 Could not set queue depth (nvme3n1) 00:26:33.248 Could not set queue depth (nvme4n1) 00:26:33.248 Could not set queue depth (nvme5n1) 00:26:33.248 Could not set queue depth 
(nvme6n1) 00:26:33.248 Could not set queue depth (nvme7n1) 00:26:33.248 Could not set queue depth (nvme8n1) 00:26:33.248 Could not set queue depth (nvme9n1) 00:26:33.248 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.248 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.248 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.248 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.248 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.248 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.248 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.248 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.248 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.248 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.248 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.248 fio-3.35 00:26:33.248 Starting 11 threads 00:26:43.223 00:26:43.223 job0: (groupid=0, jobs=1): err= 0: pid=3997775: Fri Dec 13 10:28:36 2024 00:26:43.223 write: IOPS=300, BW=75.2MiB/s (78.9MB/s)(760MiB/10105msec); 0 zone resets 00:26:43.223 slat (usec): min=23, max=166187, avg=3076.40, stdev=8334.31 00:26:43.223 clat (msec): min=15, max=559, avg=209.52, stdev=131.15 00:26:43.223 lat (msec): min=15, max=559, avg=212.60, stdev=132.90 00:26:43.223 clat percentiles (msec): 00:26:43.223 | 1.00th=[ 36], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 88], 00:26:43.223 | 30.00th=[ 100], 40.00th=[ 133], 50.00th=[ 176], 60.00th=[ 224], 00:26:43.223 | 70.00th=[ 266], 80.00th=[ 321], 90.00th=[ 430], 95.00th=[ 477], 00:26:43.223 | 99.00th=[ 558], 99.50th=[ 558], 99.90th=[ 558], 99.95th=[ 558], 00:26:43.223 | 99.99th=[ 558] 00:26:43.223 bw ( KiB/s): min=30720, max=186368, per=7.25%, avg=76240.05, stdev=46127.02, samples=20 00:26:43.223 iops : min= 120, max= 728, avg=297.80, stdev=180.20, samples=20 00:26:43.223 lat (msec) : 20=0.03%, 50=2.47%, 100=27.66%, 250=38.70%, 500=27.23% 00:26:43.223 lat (msec) : 750=3.91% 00:26:43.223 cpu : usr=0.66%, sys=0.84%, ctx=1012, majf=0, minf=1 00:26:43.223 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:43.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.223 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.223 issued rwts: total=0,3041,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.223 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.223 job1: (groupid=0, jobs=1): err= 0: pid=3997776: Fri Dec 13 10:28:36 2024 00:26:43.223 write: IOPS=335, BW=83.9MiB/s (88.0MB/s)(856MiB/10196msec); 0 zone resets 00:26:43.223 slat (usec): min=32, max=88718, avg=2230.22, stdev=6174.25 00:26:43.223 clat (usec): min=1106, max=576988, avg=188297.39, stdev=124965.32 00:26:43.223 lat (usec): min=1144, max=577035, avg=190527.61, 
stdev=126237.13 00:26:43.223 clat percentiles (msec): 00:26:43.223 | 1.00th=[ 21], 5.00th=[ 34], 10.00th=[ 55], 20.00th=[ 64], 00:26:43.223 | 30.00th=[ 81], 40.00th=[ 133], 50.00th=[ 174], 60.00th=[ 215], 00:26:43.223 | 70.00th=[ 232], 80.00th=[ 317], 90.00th=[ 384], 95.00th=[ 414], 00:26:43.223 | 99.00th=[ 460], 99.50th=[ 498], 99.90th=[ 567], 99.95th=[ 575], 00:26:43.223 | 99.99th=[ 575] 00:26:43.223 bw ( KiB/s): min=35840, max=260608, per=8.18%, avg=85994.25, stdev=62173.01, samples=20 00:26:43.223 iops : min= 140, max= 1018, avg=335.90, stdev=242.88, samples=20 00:26:43.223 lat (msec) : 2=0.15%, 4=0.12%, 10=0.06%, 20=0.70%, 50=7.74% 00:26:43.223 lat (msec) : 100=25.51%, 250=38.11%, 500=27.12%, 750=0.50% 00:26:43.223 cpu : usr=0.82%, sys=1.19%, ctx=1516, majf=0, minf=1 00:26:43.223 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:43.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.223 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.223 issued rwts: total=0,3422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.223 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.223 job2: (groupid=0, jobs=1): err= 0: pid=3997777: Fri Dec 13 10:28:36 2024 00:26:43.223 write: IOPS=516, BW=129MiB/s (135MB/s)(1325MiB/10253msec); 0 zone resets 00:26:43.223 slat (usec): min=20, max=58226, avg=1680.16, stdev=4021.38 00:26:43.223 clat (msec): min=3, max=753, avg=122.04, stdev=82.75 00:26:43.223 lat (msec): min=3, max=753, avg=123.72, stdev=83.79 00:26:43.223 clat percentiles (msec): 00:26:43.223 | 1.00th=[ 23], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 54], 00:26:43.224 | 30.00th=[ 75], 40.00th=[ 92], 50.00th=[ 100], 60.00th=[ 114], 00:26:43.224 | 70.00th=[ 146], 80.00th=[ 176], 90.00th=[ 218], 95.00th=[ 241], 00:26:43.224 | 99.00th=[ 460], 99.50th=[ 542], 99.90th=[ 726], 99.95th=[ 726], 00:26:43.224 | 99.99th=[ 751] 00:26:43.224 bw ( KiB/s): min=30720, max=290816, per=12.75%, avg=134037.25, stdev=66851.51, samples=20 00:26:43.224 iops : min= 120, max= 1136, avg=523.55, stdev=261.10, samples=20 00:26:43.224 lat (msec) : 4=0.08%, 10=0.15%, 20=0.55%, 50=14.08%, 100=37.76% 00:26:43.224 lat (msec) : 250=43.25%, 500=3.47%, 750=0.62%, 1000=0.04% 00:26:43.224 cpu : usr=1.28%, sys=1.69%, ctx=1779, majf=0, minf=1 00:26:43.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:43.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.224 issued rwts: total=0,5299,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.224 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.224 job3: (groupid=0, jobs=1): err= 0: pid=3997789: Fri Dec 13 10:28:36 2024 00:26:43.224 write: IOPS=389, BW=97.3MiB/s (102MB/s)(998MiB/10256msec); 0 zone resets 00:26:43.224 slat (usec): min=25, max=36082, avg=2109.52, stdev=4861.18 00:26:43.224 clat (msec): min=3, max=756, avg=162.22, stdev=96.30 00:26:43.224 lat (msec): min=5, max=756, avg=164.33, stdev=97.42 00:26:43.224 clat percentiles (msec): 00:26:43.224 | 1.00th=[ 45], 5.00th=[ 81], 10.00th=[ 93], 20.00th=[ 97], 00:26:43.224 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 108], 60.00th=[ 161], 00:26:43.224 | 70.00th=[ 186], 80.00th=[ 230], 90.00th=[ 292], 95.00th=[ 351], 00:26:43.224 | 99.00th=[ 493], 99.50th=[ 609], 99.90th=[ 726], 99.95th=[ 726], 00:26:43.224 | 99.99th=[ 760] 00:26:43.224 bw ( KiB/s): min=32256, max=164352, per=9.57%, 
avg=100588.75, stdev=44830.53, samples=20 00:26:43.224 iops : min= 126, max= 642, avg=392.90, stdev=175.14, samples=20 00:26:43.224 lat (msec) : 4=0.03%, 10=0.20%, 50=1.05%, 100=25.30%, 250=59.64% 00:26:43.224 lat (msec) : 500=12.93%, 750=0.83%, 1000=0.03% 00:26:43.224 cpu : usr=0.69%, sys=1.36%, ctx=1483, majf=0, minf=1 00:26:43.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:43.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.224 issued rwts: total=0,3992,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.224 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.224 job4: (groupid=0, jobs=1): err= 0: pid=3997790: Fri Dec 13 10:28:36 2024 00:26:43.224 write: IOPS=304, BW=76.1MiB/s (79.8MB/s)(779MiB/10237msec); 0 zone resets 00:26:43.224 slat (usec): min=27, max=109488, avg=2274.80, stdev=6593.96 00:26:43.224 clat (usec): min=1620, max=494444, avg=207876.83, stdev=120636.57 00:26:43.224 lat (usec): min=1688, max=494533, avg=210151.63, stdev=122057.85 00:26:43.224 clat percentiles (msec): 00:26:43.224 | 1.00th=[ 14], 5.00th=[ 37], 10.00th=[ 63], 20.00th=[ 95], 00:26:43.224 | 30.00th=[ 118], 40.00th=[ 165], 50.00th=[ 190], 60.00th=[ 218], 00:26:43.224 | 70.00th=[ 262], 80.00th=[ 342], 90.00th=[ 388], 95.00th=[ 422], 00:26:43.224 | 99.00th=[ 460], 99.50th=[ 472], 99.90th=[ 485], 99.95th=[ 493], 00:26:43.224 | 99.99th=[ 493] 00:26:43.224 bw ( KiB/s): min=39936, max=172544, per=7.43%, avg=78131.20, stdev=36691.58, samples=20 00:26:43.224 iops : min= 156, max= 674, avg=305.20, stdev=143.33, samples=20 00:26:43.224 lat (msec) : 2=0.10%, 4=0.19%, 10=0.55%, 20=1.93%, 50=4.08% 00:26:43.224 lat (msec) : 100=14.54%, 250=46.97%, 500=31.65% 00:26:43.224 cpu : usr=0.61%, sys=1.07%, ctx=1589, majf=0, minf=1 00:26:43.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:43.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.224 issued rwts: total=0,3115,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.224 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.224 job5: (groupid=0, jobs=1): err= 0: pid=3997791: Fri Dec 13 10:28:36 2024 00:26:43.224 write: IOPS=337, BW=84.4MiB/s (88.5MB/s)(853MiB/10104msec); 0 zone resets 00:26:43.224 slat (usec): min=18, max=83460, avg=2090.45, stdev=6388.98 00:26:43.224 clat (usec): min=1079, max=484579, avg=187307.71, stdev=138607.02 00:26:43.224 lat (usec): min=1130, max=484634, avg=189398.15, stdev=140429.93 00:26:43.224 clat percentiles (msec): 00:26:43.224 | 1.00th=[ 3], 5.00th=[ 16], 10.00th=[ 25], 20.00th=[ 45], 00:26:43.224 | 30.00th=[ 70], 40.00th=[ 134], 50.00th=[ 169], 60.00th=[ 207], 00:26:43.224 | 70.00th=[ 249], 80.00th=[ 342], 90.00th=[ 405], 95.00th=[ 426], 00:26:43.224 | 99.00th=[ 468], 99.50th=[ 481], 99.90th=[ 485], 99.95th=[ 485], 00:26:43.224 | 99.99th=[ 485] 00:26:43.224 bw ( KiB/s): min=36864, max=315904, per=8.16%, avg=85750.60, stdev=65715.47, samples=20 00:26:43.224 iops : min= 144, max= 1234, avg=334.95, stdev=256.68, samples=20 00:26:43.224 lat (msec) : 2=0.44%, 4=0.94%, 10=1.79%, 20=4.07%, 50=15.09% 00:26:43.224 lat (msec) : 100=12.28%, 250=35.45%, 500=29.94% 00:26:43.224 cpu : usr=0.75%, sys=1.03%, ctx=2039, majf=0, minf=2 00:26:43.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:43.224 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.224 issued rwts: total=0,3413,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.224 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.224 job6: (groupid=0, jobs=1): err= 0: pid=3997792: Fri Dec 13 10:28:36 2024 00:26:43.224 write: IOPS=260, BW=65.0MiB/s (68.2MB/s)(661MiB/10162msec); 0 zone resets 00:26:43.224 slat (usec): min=25, max=184237, avg=3063.34, stdev=8539.08 00:26:43.224 clat (usec): min=916, max=516893, avg=242803.87, stdev=124475.70 00:26:43.224 lat (usec): min=956, max=516935, avg=245867.22, stdev=125902.40 00:26:43.224 clat percentiles (usec): 00:26:43.224 | 1.00th=[ 1975], 5.00th=[ 16909], 10.00th=[ 58983], 20.00th=[112722], 00:26:43.224 | 30.00th=[183501], 40.00th=[223347], 50.00th=[235930], 60.00th=[295699], 00:26:43.224 | 70.00th=[325059], 80.00th=[350225], 90.00th=[404751], 95.00th=[429917], 00:26:43.224 | 99.00th=[467665], 99.50th=[476054], 99.90th=[509608], 99.95th=[513803], 00:26:43.224 | 99.99th=[517997] 00:26:43.224 bw ( KiB/s): min=32833, max=138240, per=6.28%, avg=66051.25, stdev=27036.29, samples=20 00:26:43.224 iops : min= 128, max= 540, avg=258.00, stdev=105.63, samples=20 00:26:43.224 lat (usec) : 1000=0.08% 00:26:43.224 lat (msec) : 2=0.95%, 4=1.82%, 10=1.32%, 20=1.70%, 50=3.03% 00:26:43.224 lat (msec) : 100=7.57%, 250=35.15%, 500=48.20%, 750=0.19% 00:26:43.224 cpu : usr=0.62%, sys=0.76%, ctx=1229, majf=0, minf=1 00:26:43.224 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:43.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.224 issued rwts: total=0,2643,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.224 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.224 job7: (groupid=0, jobs=1): err= 0: pid=3997793: Fri Dec 13 10:28:36 2024 00:26:43.224 write: IOPS=421, BW=105MiB/s (110MB/s)(1081MiB/10256msec); 0 zone resets 00:26:43.224 slat (usec): min=28, max=121150, avg=2050.92, stdev=6299.66 00:26:43.224 clat (usec): min=966, max=657929, avg=149664.79, stdev=142887.51 00:26:43.224 lat (usec): min=1011, max=657973, avg=151715.71, stdev=144501.14 00:26:43.224 clat percentiles (msec): 00:26:43.224 | 1.00th=[ 3], 5.00th=[ 14], 10.00th=[ 51], 20.00th=[ 53], 00:26:43.224 | 30.00th=[ 54], 40.00th=[ 56], 50.00th=[ 58], 60.00th=[ 113], 00:26:43.224 | 70.00th=[ 167], 80.00th=[ 296], 90.00th=[ 393], 95.00th=[ 451], 00:26:43.224 | 99.00th=[ 510], 99.50th=[ 584], 99.90th=[ 642], 99.95th=[ 651], 00:26:43.224 | 99.99th=[ 659] 00:26:43.224 bw ( KiB/s): min=34304, max=311808, per=10.37%, avg=109030.40, stdev=95616.52, samples=20 00:26:43.224 iops : min= 134, max= 1218, avg=425.90, stdev=373.50, samples=20 00:26:43.224 lat (usec) : 1000=0.02% 00:26:43.224 lat (msec) : 2=0.81%, 4=1.48%, 10=2.08%, 20=0.79%, 50=4.51% 00:26:43.224 lat (msec) : 100=47.10%, 250=18.64%, 500=23.29%, 750=1.27% 00:26:43.224 cpu : usr=1.18%, sys=1.26%, ctx=1452, majf=0, minf=1 00:26:43.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:43.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.224 issued rwts: total=0,4323,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.224 latency : target=0, window=0, percentile=100.00%, depth=64 
00:26:43.224 job8: (groupid=0, jobs=1): err= 0: pid=3997794: Fri Dec 13 10:28:36 2024 00:26:43.224 write: IOPS=387, BW=96.8MiB/s (102MB/s)(993MiB/10257msec); 0 zone resets 00:26:43.224 slat (usec): min=27, max=139994, avg=2258.47, stdev=5903.78 00:26:43.224 clat (msec): min=3, max=734, avg=162.83, stdev=106.03 00:26:43.224 lat (msec): min=5, max=734, avg=165.09, stdev=107.35 00:26:43.224 clat percentiles (msec): 00:26:43.224 | 1.00th=[ 18], 5.00th=[ 35], 10.00th=[ 83], 20.00th=[ 93], 00:26:43.224 | 30.00th=[ 99], 40.00th=[ 110], 50.00th=[ 125], 60.00th=[ 148], 00:26:43.224 | 70.00th=[ 194], 80.00th=[ 232], 90.00th=[ 321], 95.00th=[ 380], 00:26:43.224 | 99.00th=[ 493], 99.50th=[ 584], 99.90th=[ 709], 99.95th=[ 735], 00:26:43.224 | 99.99th=[ 735] 00:26:43.224 bw ( KiB/s): min=32768, max=172032, per=9.52%, avg=100077.00, stdev=40870.43, samples=20 00:26:43.224 iops : min= 128, max= 672, avg=390.90, stdev=159.67, samples=20 00:26:43.224 lat (msec) : 4=0.03%, 10=0.28%, 20=1.51%, 50=6.29%, 100=25.92% 00:26:43.224 lat (msec) : 250=49.43%, 500=15.58%, 750=0.96% 00:26:43.224 cpu : usr=0.91%, sys=1.37%, ctx=1437, majf=0, minf=1 00:26:43.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:43.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.224 issued rwts: total=0,3973,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.224 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.224 job9: (groupid=0, jobs=1): err= 0: pid=3997795: Fri Dec 13 10:28:36 2024 00:26:43.224 write: IOPS=443, BW=111MiB/s (116MB/s)(1120MiB/10100msec); 0 zone resets 00:26:43.224 slat (usec): min=20, max=72398, avg=2073.89, stdev=4569.93 00:26:43.224 clat (msec): min=12, max=453, avg=142.20, stdev=78.15 00:26:43.224 lat (msec): min=12, max=464, avg=144.27, stdev=79.06 00:26:43.224 clat percentiles (msec): 00:26:43.224 | 1.00th=[ 51], 5.00th=[ 57], 10.00th=[ 64], 20.00th=[ 94], 00:26:43.224 | 30.00th=[ 100], 40.00th=[ 103], 50.00th=[ 106], 60.00th=[ 144], 00:26:43.224 | 70.00th=[ 171], 80.00th=[ 186], 90.00th=[ 230], 95.00th=[ 321], 00:26:43.225 | 99.00th=[ 414], 99.50th=[ 426], 99.90th=[ 451], 99.95th=[ 456], 00:26:43.225 | 99.99th=[ 456] 00:26:43.225 bw ( KiB/s): min=46685, max=247808, per=10.76%, avg=113054.25, stdev=53288.42, samples=20 00:26:43.225 iops : min= 182, max= 968, avg=441.60, stdev=208.18, samples=20 00:26:43.225 lat (msec) : 20=0.36%, 50=0.36%, 100=31.59%, 250=60.42%, 500=7.28% 00:26:43.225 cpu : usr=0.85%, sys=1.36%, ctx=1306, majf=0, minf=1 00:26:43.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:43.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.225 issued rwts: total=0,4479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.225 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.225 job10: (groupid=0, jobs=1): err= 0: pid=3997796: Fri Dec 13 10:28:36 2024 00:26:43.225 write: IOPS=429, BW=107MiB/s (113MB/s)(1102MiB/10253msec); 0 zone resets 00:26:43.225 slat (usec): min=23, max=39315, avg=1955.34, stdev=4995.98 00:26:43.225 clat (usec): min=1612, max=753632, avg=146468.24, stdev=113557.74 00:26:43.225 lat (msec): min=2, max=753, avg=148.42, stdev=115.02 00:26:43.225 clat percentiles (msec): 00:26:43.225 | 1.00th=[ 7], 5.00th=[ 23], 10.00th=[ 49], 20.00th=[ 54], 00:26:43.225 | 30.00th=[ 56], 
40.00th=[ 87], 50.00th=[ 90], 60.00th=[ 148], 00:26:43.225 | 70.00th=[ 199], 80.00th=[ 247], 90.00th=[ 313], 95.00th=[ 347], 00:26:43.225 | 99.00th=[ 485], 99.50th=[ 575], 99.90th=[ 726], 99.95th=[ 726], 00:26:43.225 | 99.99th=[ 751] 00:26:43.225 bw ( KiB/s): min=31232, max=306176, per=10.58%, avg=111232.00, stdev=81138.31, samples=20 00:26:43.225 iops : min= 122, max= 1196, avg=434.50, stdev=316.95, samples=20 00:26:43.225 lat (msec) : 2=0.05%, 4=0.27%, 10=1.52%, 20=2.50%, 50=7.21% 00:26:43.225 lat (msec) : 100=41.42%, 250=27.34%, 500=18.90%, 750=0.75%, 1000=0.05% 00:26:43.225 cpu : usr=0.85%, sys=1.36%, ctx=1782, majf=0, minf=1 00:26:43.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:43.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.225 issued rwts: total=0,4408,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.225 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.225 00:26:43.225 Run status group 0 (all jobs): 00:26:43.225 WRITE: bw=1026MiB/s (1076MB/s), 65.0MiB/s-129MiB/s (68.2MB/s-135MB/s), io=10.3GiB (11.0GB), run=10100-10257msec 00:26:43.225 00:26:43.225 Disk stats (read/write): 00:26:43.225 nvme0n1: ios=49/5907, merge=0/0, ticks=49/1208396, in_queue=1208445, util=97.42% 00:26:43.225 nvme10n1: ios=47/6830, merge=0/0, ticks=2388/1237822, in_queue=1240210, util=99.80% 00:26:43.225 nvme1n1: ios=45/10543, merge=0/0, ticks=1892/1220694, in_queue=1222586, util=100.00% 00:26:43.225 nvme2n1: ios=46/7925, merge=0/0, ticks=43/1229875, in_queue=1229918, util=98.02% 00:26:43.225 nvme3n1: ios=43/6188, merge=0/0, ticks=2563/1235912, in_queue=1238475, util=100.00% 00:26:43.225 nvme4n1: ios=0/6639, merge=0/0, ticks=0/1216045, in_queue=1216045, util=98.10% 00:26:43.225 nvme5n1: ios=48/5075, merge=0/0, ticks=3946/1193757, in_queue=1197703, util=100.00% 00:26:43.225 nvme6n1: ios=41/8590, merge=0/0, ticks=1309/1225365, in_queue=1226674, util=100.00% 00:26:43.225 nvme7n1: ios=43/7885, merge=0/0, ticks=3320/1212305, in_queue=1215625, util=100.00% 00:26:43.225 nvme8n1: ios=0/8779, merge=0/0, ticks=0/1206697, in_queue=1206697, util=98.93% 00:26:43.225 nvme9n1: ios=48/8761, merge=0/0, ticks=1285/1226076, in_queue=1227361, util=100.00% 00:26:43.225 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:43.225 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:43.225 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.225 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:43.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:43.225 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:43.225 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:43.225 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:43.225 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:43.225 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w 
SPDK1 00:26:43.225 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:43.225 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:43.225 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:43.225 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.225 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.225 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.225 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.225 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:43.792 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:43.792 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:43.792 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:43.792 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:43.792 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:43.792 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:43.792 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:43.792 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:43.792 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:43.792 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.792 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.792 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.792 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.792 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:44.359 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:44.359 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:44.359 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:44.359 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:44.359 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:44.359 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o 
NAME,SERIAL 00:26:44.359 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:44.359 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:44.359 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:44.359 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.359 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.359 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.359 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.359 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:44.926 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:44.926 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:44.926 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:44.926 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:44.926 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:44.926 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:44.926 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:44.926 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:44.926 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:44.926 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.926 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.926 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.926 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.926 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:45.493 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:45.493 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:45.493 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:45.493 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:45.493 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:45.493 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o 
NAME,SERIAL 00:26:45.493 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:45.493 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:45.493 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:45.493 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.493 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:45.493 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.493 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:45.493 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:46.061 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:46.061 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:46.061 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:46.061 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:46.061 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:46.061 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:46.061 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:46.061 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:46.061 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:46.061 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.061 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:46.061 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.061 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:46.061 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:46.320 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:46.320 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:46.320 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:46.320 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:46.320 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:46.320 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w 
SPDK7 00:26:46.320 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:46.320 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:46.320 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:46.320 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.320 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:46.320 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.320 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:46.320 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:46.887 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:46.887 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:46.887 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:46.887 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:46.887 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:46.887 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:46.887 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:46.887 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:46.887 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:46.887 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.887 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:46.887 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.887 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:46.887 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:47.454 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:47.454 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:47.454 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:47.454 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:47.454 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:47.454 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w 
SPDK9 00:26:47.454 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:47.454 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:47.454 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:47.454 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.454 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:47.454 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.454 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:47.454 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:47.712 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:47.712 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:47.712 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:47.712 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:47.712 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:47.712 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:47.712 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:47.712 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:47.712 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:47.712 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.712 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:47.712 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.712 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:47.712 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:47.971 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:47.971 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:47.971 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:47.971 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:47.971 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:47.971 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # 
lsblk -l -o NAME,SERIAL 00:26:47.971 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:47.971 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:47.971 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:47.971 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.971 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:47.971 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.971 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:47.971 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:47.971 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:47.971 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:47.971 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:47.971 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:47.971 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:47.971 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:47.971 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:47.971 rmmod nvme_tcp 00:26:47.971 rmmod nvme_fabrics 00:26:48.230 rmmod nvme_keyring 00:26:48.230 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:48.230 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:48.230 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:48.230 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 3990002 ']' 00:26:48.230 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 3990002 00:26:48.230 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 3990002 ']' 00:26:48.230 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 3990002 00:26:48.230 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:48.230 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:48.230 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3990002 00:26:48.230 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:48.230 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:48.230 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3990002' 00:26:48.230 killing process with pid 3990002 00:26:48.230 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 3990002 00:26:48.230 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 3990002 00:26:51.513 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:51.513 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:51.513 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:51.513 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:51.513 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:51.513 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:51.513 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:51.513 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:51.513 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:51.513 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.513 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.513 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.416 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:53.416 00:26:53.416 real 1m17.574s 00:26:53.416 user 4m42.280s 00:26:53.416 sys 0m17.575s 00:26:53.416 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:53.416 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.416 ************************************ 00:26:53.416 END TEST nvmf_multiconnection 00:26:53.416 ************************************ 00:26:53.675 10:28:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:53.675 10:28:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:53.675 10:28:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:53.675 10:28:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:53.676 ************************************ 00:26:53.676 START TEST nvmf_initiator_timeout 00:26:53.676 ************************************ 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:53.676 * Looking for test storage... 
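The multiconnection teardown traced above (multiconnection.sh lines 37-47) reduces to a short loop; the sketch below only condenses what the xtrace already shows, reusing the waitforserial_disconnect, rpc_cmd and nvmftestfini helpers from common/autotest_common.sh and nvmf/common.sh that appear in the trace — it is a reading aid, not the script itself.

for i in $(seq 1 $NVMF_SUBSYS); do                              # NVMF_SUBSYS is 11 in this run (seq 1 11)
  nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"              # drop the host-side controller
  waitforserial_disconnect "SPDK$i"                             # poll lsblk until serial SPDK$i disappears
  rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # remove the matching subsystem on the target
done
rm -f ./local-job0-0-verify.state                               # discard the fio verify-state file
nvmftestfini                                                    # unload nvme-tcp/nvme-fabrics, kill nvmf_tgt, restore networking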
00:26:53.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.676 --rc genhtml_branch_coverage=1 00:26:53.676 --rc genhtml_function_coverage=1 00:26:53.676 --rc genhtml_legend=1 00:26:53.676 --rc geninfo_all_blocks=1 00:26:53.676 --rc geninfo_unexecuted_blocks=1 00:26:53.676 00:26:53.676 ' 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.676 --rc genhtml_branch_coverage=1 00:26:53.676 --rc genhtml_function_coverage=1 00:26:53.676 --rc genhtml_legend=1 00:26:53.676 --rc geninfo_all_blocks=1 00:26:53.676 --rc geninfo_unexecuted_blocks=1 00:26:53.676 00:26:53.676 ' 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.676 --rc genhtml_branch_coverage=1 00:26:53.676 --rc genhtml_function_coverage=1 00:26:53.676 --rc genhtml_legend=1 00:26:53.676 --rc geninfo_all_blocks=1 00:26:53.676 --rc geninfo_unexecuted_blocks=1 00:26:53.676 00:26:53.676 ' 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:53.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:53.676 --rc genhtml_branch_coverage=1 00:26:53.676 --rc genhtml_function_coverage=1 00:26:53.676 --rc genhtml_legend=1 00:26:53.676 --rc geninfo_all_blocks=1 00:26:53.676 --rc geninfo_unexecuted_blocks=1 00:26:53.676 00:26:53.676 ' 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:53.676 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:53.677 10:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:53.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:53.677 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:53.677 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:53.677 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:53.936 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:53.936 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:53.936 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:53.936 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:53.936 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:53.936 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:53.936 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:53.936 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:53.936 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.936 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:53.936 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.936 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:53.936 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:53.936 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:53.936 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:59.207 10:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:59.207 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:59.207 10:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:59.207 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:59.207 Found net devices under 0000:af:00.0: cvl_0_0 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.207 10:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:59.207 Found net devices under 0000:af:00.1: cvl_0_1 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:59.207 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:59.208 10:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:59.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:59.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:26:59.208 00:26:59.208 --- 10.0.0.2 ping statistics --- 00:26:59.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.208 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:59.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:59.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:26:59.208 00:26:59.208 --- 10.0.0.1 ping statistics --- 00:26:59.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.208 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:59.208 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:59.208 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:59.208 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:59.208 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:59.208 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.208 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=4003800 00:26:59.208 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 
4003800 00:26:59.208 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:59.208 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 4003800 ']' 00:26:59.208 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.208 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:59.208 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.208 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:59.208 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.467 [2024-12-13 10:28:53.100516] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:26:59.467 [2024-12-13 10:28:53.100605] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:59.467 [2024-12-13 10:28:53.219333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:59.467 [2024-12-13 10:28:53.328296] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:59.467 [2024-12-13 10:28:53.328344] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:59.467 [2024-12-13 10:28:53.328354] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:59.467 [2024-12-13 10:28:53.328364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:59.467 [2024-12-13 10:28:53.328371] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
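The target start-up logged above comes down to launching nvmf_tgt inside the namespace that was just wired up and waiting for its RPC socket; a minimal sketch, with the namespace name, core mask, event mask and binary taken from this run, and waitforlisten being the helper shown in the trace:

ip netns exec cvl_0_0_ns_spdk \
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # 4 reactors on cores 0-3, tracepoint group mask 0xFFFF
nvmfpid=$!                                        # 4003800 in this run
waitforlisten "$nvmfpid"                          # returns once /var/tmp/spdk.sock accepts RPC connections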
00:26:59.467 [2024-12-13 10:28:53.330720] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.467 [2024-12-13 10:28:53.330734] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:59.467 [2024-12-13 10:28:53.330831] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.467 [2024-12-13 10:28:53.330840] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:00.032 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:00.032 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:27:00.032 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:00.032 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:00.032 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.290 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:00.290 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:00.290 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:00.290 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.290 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.290 Malloc0 00:27:00.290 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.290 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:00.291 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.291 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.291 Delay0 00:27:00.291 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.291 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:00.291 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.291 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.291 [2024-12-13 10:28:54.065778] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:00.291 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.291 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:00.291 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.291 10:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.291 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.291 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:00.291 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.291 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.291 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.291 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:00.291 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.291 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.291 [2024-12-13 10:28:54.098092] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:00.291 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.291 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:01.665 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:01.665 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:27:01.665 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:01.665 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:01.665 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:27:03.658 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:03.659 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:03.659 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:03.659 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:03.659 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:03.659 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:27:03.659 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=4004471 00:27:03.659 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 
1 -t write -r 60 -v 00:27:03.659 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:03.659 [global] 00:27:03.659 thread=1 00:27:03.659 invalidate=1 00:27:03.659 rw=write 00:27:03.659 time_based=1 00:27:03.659 runtime=60 00:27:03.659 ioengine=libaio 00:27:03.659 direct=1 00:27:03.659 bs=4096 00:27:03.659 iodepth=1 00:27:03.659 norandommap=0 00:27:03.659 numjobs=1 00:27:03.659 00:27:03.659 verify_dump=1 00:27:03.659 verify_backlog=512 00:27:03.659 verify_state_save=0 00:27:03.659 do_verify=1 00:27:03.659 verify=crc32c-intel 00:27:03.659 [job0] 00:27:03.659 filename=/dev/nvme0n1 00:27:03.659 Could not set queue depth (nvme0n1) 00:27:03.659 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:03.659 fio-3.35 00:27:03.659 Starting 1 thread 00:27:06.945 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:06.945 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.945 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.945 true 00:27:06.945 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.945 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:06.945 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.945 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.945 true 00:27:06.945 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.945 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:06.945 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.945 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.945 true 00:27:06.945 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.945 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:06.945 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.945 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.945 true 00:27:06.945 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.945 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:09.516 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:09.516 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.516 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@10 -- # set +x 00:27:09.516 true 00:27:09.516 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.516 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:09.516 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.516 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:09.516 true 00:27:09.516 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.516 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:09.516 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.516 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:09.516 true 00:27:09.516 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.516 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:09.516 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.516 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:09.516 true 00:27:09.516 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.516 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:09.516 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 4004471 00:28:05.734 00:28:05.734 job0: (groupid=0, jobs=1): err= 0: pid=4004629: Fri Dec 13 10:29:57 2024 00:28:05.734 read: IOPS=24, BW=97.7KiB/s (100kB/s)(5864KiB/60041msec) 00:28:05.734 slat (usec): min=7, max=2769, avg=14.90, stdev=72.36 00:28:05.734 clat (usec): min=236, max=41395k, avg=40684.81, stdev=1080966.23 00:28:05.735 lat (usec): min=243, max=41395k, avg=40699.71, stdev=1080966.62 00:28:05.735 clat percentiles (usec): 00:28:05.735 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[ 251], 00:28:05.735 | 20.00th=[ 260], 30.00th=[ 269], 40.00th=[ 277], 00:28:05.735 | 50.00th=[ 285], 60.00th=[ 297], 70.00th=[ 619], 00:28:05.735 | 80.00th=[ 41157], 90.00th=[ 41157], 95.00th=[ 41157], 00:28:05.735 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 44827], 00:28:05.735 | 99.95th=[17112761], 99.99th=[17112761] 00:28:05.735 write: IOPS=25, BW=102KiB/s (105kB/s)(6144KiB/60041msec); 0 zone resets 00:28:05.735 slat (usec): min=10, max=28401, avg=31.15, stdev=724.37 00:28:05.735 clat (usec): min=172, max=366, avg=204.27, stdev=16.97 00:28:05.735 lat (usec): min=184, max=28760, avg=235.42, stdev=728.51 00:28:05.735 clat percentiles (usec): 00:28:05.735 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 192], 00:28:05.735 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:28:05.735 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 223], 95.00th=[ 231], 00:28:05.735 | 99.00th=[ 245], 99.50th=[ 262], 99.90th=[ 359], 99.95th=[ 367], 00:28:05.735 | 99.99th=[ 367] 
00:28:05.735 bw ( KiB/s): min= 4096, max= 8192, per=100.00%, avg=6144.00, stdev=2896.31, samples=2 00:28:05.735 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:28:05.735 lat (usec) : 250=55.16%, 500=30.11%, 750=0.10% 00:28:05.735 lat (msec) : 50=14.59%, >=2000=0.03% 00:28:05.735 cpu : usr=0.08%, sys=0.08%, ctx=3006, majf=0, minf=1 00:28:05.735 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:05.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.735 issued rwts: total=1466,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.735 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:05.735 00:28:05.735 Run status group 0 (all jobs): 00:28:05.735 READ: bw=97.7KiB/s (100kB/s), 97.7KiB/s-97.7KiB/s (100kB/s-100kB/s), io=5864KiB (6005kB), run=60041-60041msec 00:28:05.735 WRITE: bw=102KiB/s (105kB/s), 102KiB/s-102KiB/s (105kB/s-105kB/s), io=6144KiB (6291kB), run=60041-60041msec 00:28:05.735 00:28:05.735 Disk stats (read/write): 00:28:05.735 nvme0n1: ios=1514/1536, merge=0/0, ticks=19381/291, in_queue=19672, util=99.87% 00:28:05.735 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:05.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:05.735 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:05.735 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:28:05.735 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:05.735 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:05.735 nvmf hotplug test: fio successful as expected 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 
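(Aside, not part of the captured log: the rpc_cmd calls scattered through the run above amount to the following sequence, shown here as a condensed sketch using scripts/rpc.py rather than the harness's rpc_cmd wrapper; all subcommands and argument values are taken from the log, and the uniform 31000000 value and loop structure are a simplification. The delay bdev latencies are in microseconds, so 31000000 is roughly 31 s, well past the initiator's command timeout, while 30 restores the 30 us baseline so the outstanding fio writes can complete and verify.)
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # 30 us baseline
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # with fio writing in the background, push latency past the initiator timeout...
  for lat in avg_read avg_write p99_read p99_write; do
      $RPC bdev_delay_update_latency Delay0 $lat 31000000
  done
  sleep 3
  # ...then drop it back so the queued I/O drains and verification passes
  for lat in avg_read avg_write p99_read p99_write; do
      $RPC bdev_delay_update_latency Delay0 $lat 30
  done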
00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:05.735 rmmod nvme_tcp 00:28:05.735 rmmod nvme_fabrics 00:28:05.735 rmmod nvme_keyring 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 4003800 ']' 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 4003800 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 4003800 ']' 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 4003800 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4003800 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4003800' 00:28:05.735 killing process with pid 4003800 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 4003800 00:28:05.735 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 4003800 00:28:05.735 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:05.735 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:05.735 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:05.735 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:05.735 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:28:05.735 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:05.735 10:29:59 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:28:05.735 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:05.735 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:05.735 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.735 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:05.735 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.266 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:08.266 00:28:08.266 real 1m14.209s 00:28:08.266 user 4m29.180s 00:28:08.266 sys 0m6.105s 00:28:08.266 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:08.266 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:08.266 ************************************ 00:28:08.266 END TEST nvmf_initiator_timeout 00:28:08.266 ************************************ 00:28:08.266 10:30:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:08.266 10:30:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:08.266 10:30:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:08.266 10:30:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:08.266 10:30:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:13.529 10:30:06 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:13.529 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:13.529 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:13.529 Found net devices under 0000:af:00.0: cvl_0_0 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:13.529 Found net devices under 0000:af:00.1: cvl_0_1 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:13.529 ************************************ 00:28:13.529 START TEST nvmf_perf_adq 00:28:13.529 ************************************ 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:13.529 * Looking for test storage... 
00:28:13.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:13.529 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:13.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.530 --rc genhtml_branch_coverage=1 00:28:13.530 --rc genhtml_function_coverage=1 00:28:13.530 --rc genhtml_legend=1 00:28:13.530 --rc geninfo_all_blocks=1 00:28:13.530 --rc geninfo_unexecuted_blocks=1 00:28:13.530 00:28:13.530 ' 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:13.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.530 --rc genhtml_branch_coverage=1 00:28:13.530 --rc genhtml_function_coverage=1 00:28:13.530 --rc genhtml_legend=1 00:28:13.530 --rc geninfo_all_blocks=1 00:28:13.530 --rc geninfo_unexecuted_blocks=1 00:28:13.530 00:28:13.530 ' 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:13.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.530 --rc genhtml_branch_coverage=1 00:28:13.530 --rc genhtml_function_coverage=1 00:28:13.530 --rc genhtml_legend=1 00:28:13.530 --rc geninfo_all_blocks=1 00:28:13.530 --rc geninfo_unexecuted_blocks=1 00:28:13.530 00:28:13.530 ' 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:13.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.530 --rc genhtml_branch_coverage=1 00:28:13.530 --rc genhtml_function_coverage=1 00:28:13.530 --rc genhtml_legend=1 00:28:13.530 --rc geninfo_all_blocks=1 00:28:13.530 --rc geninfo_unexecuted_blocks=1 00:28:13.530 00:28:13.530 ' 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:13.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:13.530 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:13.530 10:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:18.801 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:18.801 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:18.801 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:18.801 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:18.801 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:18.801 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:18.801 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:18.801 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:18.801 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:18.801 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:18.801 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:18.801 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:18.801 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:18.801 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:18.801 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:18.801 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:18.801 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:18.801 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:18.801 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:18.801 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:18.801 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:18.802 10:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:18.802 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:18.802 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:18.802 Found net devices under 0000:af:00.0: cvl_0_0 00:28:18.802 10:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:18.802 Found net devices under 0000:af:00.1: cvl_0_1 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:18.802 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:19.369 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:22.657 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:27.927 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:27.927 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:27.927 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:27.927 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:27.927 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:27.927 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:27.927 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.927 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:27.927 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.927 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:27.928 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:27.928 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:27.928 Found net devices under 0000:af:00.0: cvl_0_0 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:27.928 Found net devices under 0000:af:00.1: cvl_0_1 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:27.928 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:27.928 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:27.928 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:27.928 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:27.928 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:27.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:27.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.742 ms 00:28:27.928 00:28:27.928 --- 10.0.0.2 ping statistics --- 00:28:27.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.929 rtt min/avg/max/mdev = 0.742/0.742/0.742/0.000 ms 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:27.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:27.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:28:27.929 00:28:27.929 --- 10.0.0.1 ping statistics --- 00:28:27.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.929 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=4022741 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 4022741 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 4022741 ']' 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:27.929 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:27.929 [2024-12-13 10:30:21.202644] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
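For readers reproducing the target-side plumbing traced above by hand, the nvmf_tcp_init sequence amounts to the following minimal sketch. It assumes two back-to-back E810 ports named cvl_0_0 and cvl_0_1 exactly as in this run; interface names, addresses, and the 4420 listener port are taken from the log and will differ on other hosts.

  # create a namespace for the target and move one port into it
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator-side port stays in the root namespace, target port is addressed inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # bring the links up and open the NVMe/TCP port on the initiator side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # sanity-check connectivity in both directions, as the log does
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt binary is then launched under ip netns exec so that it listens on 10.0.0.2 inside the namespace while the perf initiator connects from the root namespace.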
00:28:27.929 [2024-12-13 10:30:21.202731] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.929 [2024-12-13 10:30:21.335975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:27.929 [2024-12-13 10:30:21.444095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:27.929 [2024-12-13 10:30:21.444140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:27.929 [2024-12-13 10:30:21.444151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:27.929 [2024-12-13 10:30:21.444161] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:27.929 [2024-12-13 10:30:21.444169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:27.929 [2024-12-13 10:30:21.446609] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.929 [2024-12-13 10:30:21.446681] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:27.929 [2024-12-13 10:30:21.446742] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.929 [2024-12-13 10:30:21.446752] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:28.187 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:28.187 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:28.187 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:28.187 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:28.187 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:28.187 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:28.187 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:28.187 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:28.187 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:28.187 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.187 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:28.187 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.446 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:28.446 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:28.446 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.446 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:28.446 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.446 
10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:28.446 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.446 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:28.704 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.704 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:28.704 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.704 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:28.704 [2024-12-13 10:30:22.467808] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:28.704 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.704 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:28.704 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.704 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:28.704 Malloc1 00:28:28.704 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.704 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:28.704 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.704 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:28.704 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.704 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:28.704 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.704 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:28.704 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.704 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:28.704 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.704 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:28.704 [2024-12-13 10:30:22.594913] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:28.962 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.962 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=4023019 00:28:28.962 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:28.962 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:30.864 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:30.864 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.864 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.864 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.864 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:30.864 "tick_rate": 2100000000, 00:28:30.864 "poll_groups": [ 00:28:30.864 { 00:28:30.864 "name": "nvmf_tgt_poll_group_000", 00:28:30.864 "admin_qpairs": 1, 00:28:30.864 "io_qpairs": 1, 00:28:30.864 "current_admin_qpairs": 1, 00:28:30.864 "current_io_qpairs": 1, 00:28:30.864 "pending_bdev_io": 0, 00:28:30.864 "completed_nvme_io": 18729, 00:28:30.864 "transports": [ 00:28:30.864 { 00:28:30.864 "trtype": "TCP" 00:28:30.864 } 00:28:30.864 ] 00:28:30.864 }, 00:28:30.864 { 00:28:30.864 "name": "nvmf_tgt_poll_group_001", 00:28:30.864 "admin_qpairs": 0, 00:28:30.864 "io_qpairs": 1, 00:28:30.864 "current_admin_qpairs": 0, 00:28:30.864 "current_io_qpairs": 1, 00:28:30.864 "pending_bdev_io": 0, 00:28:30.864 "completed_nvme_io": 18504, 00:28:30.864 "transports": [ 00:28:30.864 { 00:28:30.864 "trtype": "TCP" 00:28:30.864 } 00:28:30.864 ] 00:28:30.864 }, 00:28:30.864 { 00:28:30.864 "name": "nvmf_tgt_poll_group_002", 00:28:30.864 "admin_qpairs": 0, 00:28:30.864 "io_qpairs": 1, 00:28:30.864 "current_admin_qpairs": 0, 00:28:30.864 "current_io_qpairs": 1, 00:28:30.864 "pending_bdev_io": 0, 00:28:30.864 "completed_nvme_io": 18834, 00:28:30.864 "transports": [ 00:28:30.864 { 00:28:30.864 "trtype": "TCP" 00:28:30.864 } 00:28:30.864 ] 00:28:30.864 }, 00:28:30.864 { 00:28:30.864 "name": "nvmf_tgt_poll_group_003", 00:28:30.864 "admin_qpairs": 0, 00:28:30.864 "io_qpairs": 1, 00:28:30.864 "current_admin_qpairs": 0, 00:28:30.864 "current_io_qpairs": 1, 00:28:30.864 "pending_bdev_io": 0, 00:28:30.864 "completed_nvme_io": 18552, 00:28:30.864 "transports": [ 00:28:30.864 { 00:28:30.864 "trtype": "TCP" 00:28:30.864 } 00:28:30.864 ] 00:28:30.864 } 00:28:30.864 ] 00:28:30.864 }' 00:28:30.864 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:30.864 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:30.864 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:30.864 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:30.864 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 4023019 00:28:38.977 Initializing NVMe Controllers 00:28:38.977 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:38.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:38.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:38.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:38.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:28:38.977 Initialization complete. Launching workers. 00:28:38.977 ======================================================== 00:28:38.977 Latency(us) 00:28:38.977 Device Information : IOPS MiB/s Average min max 00:28:38.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10224.78 39.94 6258.25 2109.79 10438.91 00:28:38.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10156.68 39.67 6300.08 2058.68 11693.80 00:28:38.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10284.58 40.17 6223.45 2016.22 11294.36 00:28:38.977 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10078.29 39.37 6351.17 1836.70 11475.92 00:28:38.977 ======================================================== 00:28:38.977 Total : 40744.34 159.16 6282.88 1836.70 11693.80 00:28:38.977 00:28:38.977 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:38.977 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:38.977 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:38.977 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:38.977 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:38.977 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:38.977 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:38.977 rmmod nvme_tcp 00:28:38.977 rmmod nvme_fabrics 00:28:39.236 rmmod nvme_keyring 00:28:39.236 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:39.236 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:39.236 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:39.236 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 4022741 ']' 00:28:39.236 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 4022741 00:28:39.236 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 4022741 ']' 00:28:39.236 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 4022741 00:28:39.236 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:39.236 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.236 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4022741 00:28:39.236 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:39.236 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:39.236 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4022741' 00:28:39.236 killing process with pid 4022741 00:28:39.236 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 4022741 00:28:39.236 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 4022741 00:28:40.616 10:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:40.616 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:40.616 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:40.616 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:40.616 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:40.616 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:40.616 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:40.616 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:40.616 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:40.616 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.616 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.616 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.521 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:42.521 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:42.521 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:42.521 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:43.897 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:46.433 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:51.707 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:51.707 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:51.707 Found net devices under 0000:af:00.0: cvl_0_0 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:51.707 Found net devices under 0000:af:00.1: cvl_0_1 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:51.707 10:30:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:51.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:51.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:28:51.707 00:28:51.707 --- 10.0.0.2 ping statistics --- 00:28:51.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.707 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:28:51.707 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:51.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:51.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:28:51.707 00:28:51.707 --- 10.0.0.1 ping statistics --- 00:28:51.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.707 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:28:51.708 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:51.708 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:51.708 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:51.708 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:51.708 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:51.708 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:51.708 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:51.708 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:51.708 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:51.708 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:51.708 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:51.708 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:51.708 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:51.708 net.core.busy_poll = 1 00:28:51.708 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:51.708 net.core.busy_read = 1 00:28:51.708 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:51.708 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:51.967 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:51.967 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:51.967 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:51.967 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:51.967 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:51.967 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:51.967 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:51.967 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=4027124 00:28:51.967 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 4027124 00:28:51.967 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:51.967 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 4027124 ']' 00:28:51.967 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.967 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:51.967 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:51.967 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:51.967 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:51.967 [2024-12-13 10:30:45.780260] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:28:51.967 [2024-12-13 10:30:45.780348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.225 [2024-12-13 10:30:45.897617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:52.225 [2024-12-13 10:30:46.005537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
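The adq_configure_driver step traced above can be read as the standalone sketch below. The commands are taken from this run (they are executed inside the cvl_0_0_ns_spdk namespace via ip netns exec); the queue layout (num_tc 2, 2@0 2@2) and the 10.0.0.2:4420 flower match are just the values this test uses, not the only possible ADQ configuration.

  # enable hardware TC offload on the E810 port and turn off packet-inspect optimization
  ethtool --offload cvl_0_0 hw-tc-offload on
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off

  # let sockets busy-poll the NIC queues instead of waiting on interrupts
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1

  # split the queues into two traffic classes and steer NVMe/TCP (dst port 4420) into the second one
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev cvl_0_0 ingress
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 \
      ip_proto tcp dst_port 4420 skip_sw hw_tc 1

  # pin transmit queues to the matching receive queues (helper shipped with SPDK)
  scripts/perf/nvmf/set_xps_rxqs cvl_0_0

With ADQ enabled the target is started with --sock-priority 1 and the posix sock implementation gets --enable-placement-id 1, so connections are placed on the poll groups that own the steered queues.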
00:28:52.225 [2024-12-13 10:30:46.005584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:52.225 [2024-12-13 10:30:46.005595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:52.225 [2024-12-13 10:30:46.005607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:52.225 [2024-12-13 10:30:46.005617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:52.225 [2024-12-13 10:30:46.008022] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.225 [2024-12-13 10:30:46.008094] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:52.225 [2024-12-13 10:30:46.008157] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.225 [2024-12-13 10:30:46.008165] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:52.793 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:52.793 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:52.793 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:52.793 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:52.793 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:52.793 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.793 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:52.793 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:52.793 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:52.793 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.793 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:52.793 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.052 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:53.052 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:53.052 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.052 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.052 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.052 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:53.052 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.052 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.311 10:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.311 [2024-12-13 10:30:47.065508] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.311 Malloc1 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.311 [2024-12-13 10:30:47.190881] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=4027368 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:53.311 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:55.845 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:55.845 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.845 10:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:55.845 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.845 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:55.845 "tick_rate": 2100000000, 00:28:55.845 "poll_groups": [ 00:28:55.845 { 00:28:55.845 "name": "nvmf_tgt_poll_group_000", 00:28:55.845 "admin_qpairs": 1, 00:28:55.845 "io_qpairs": 3, 00:28:55.845 "current_admin_qpairs": 1, 00:28:55.845 "current_io_qpairs": 3, 00:28:55.845 "pending_bdev_io": 0, 00:28:55.845 "completed_nvme_io": 25933, 00:28:55.845 "transports": [ 00:28:55.845 { 00:28:55.845 "trtype": "TCP" 00:28:55.845 } 00:28:55.845 ] 00:28:55.845 }, 00:28:55.845 { 00:28:55.845 "name": "nvmf_tgt_poll_group_001", 00:28:55.845 "admin_qpairs": 0, 00:28:55.845 "io_qpairs": 1, 00:28:55.845 "current_admin_qpairs": 0, 00:28:55.845 "current_io_qpairs": 1, 00:28:55.845 "pending_bdev_io": 0, 00:28:55.845 "completed_nvme_io": 25439, 00:28:55.845 "transports": [ 00:28:55.845 { 00:28:55.845 "trtype": "TCP" 00:28:55.845 } 00:28:55.845 ] 00:28:55.845 }, 00:28:55.845 { 00:28:55.845 "name": "nvmf_tgt_poll_group_002", 00:28:55.845 "admin_qpairs": 0, 00:28:55.845 "io_qpairs": 0, 00:28:55.845 "current_admin_qpairs": 0, 00:28:55.845 "current_io_qpairs": 0, 00:28:55.845 "pending_bdev_io": 0, 00:28:55.845 "completed_nvme_io": 0, 00:28:55.846 "transports": [ 00:28:55.846 { 00:28:55.846 "trtype": "TCP" 00:28:55.846 } 00:28:55.846 ] 00:28:55.846 }, 00:28:55.846 { 00:28:55.846 "name": "nvmf_tgt_poll_group_003", 00:28:55.846 "admin_qpairs": 0, 00:28:55.846 "io_qpairs": 0, 00:28:55.846 "current_admin_qpairs": 0, 00:28:55.846 "current_io_qpairs": 0, 00:28:55.846 "pending_bdev_io": 0, 00:28:55.846 "completed_nvme_io": 0, 00:28:55.846 "transports": [ 00:28:55.846 { 00:28:55.846 "trtype": "TCP" 00:28:55.846 } 00:28:55.846 ] 00:28:55.846 } 00:28:55.846 ] 00:28:55.846 }' 00:28:55.846 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:55.846 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:55.846 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:55.846 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:55.846 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 4027368 00:29:03.964 Initializing NVMe Controllers 00:29:03.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:03.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:03.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:03.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:03.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:03.964 Initialization complete. Launching workers. 
00:29:03.964 ======================================================== 00:29:03.964 Latency(us) 00:29:03.964 Device Information : IOPS MiB/s Average min max 00:29:03.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4983.50 19.47 12886.29 1664.11 62278.78 00:29:03.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4855.10 18.97 13226.36 1741.96 62117.45 00:29:03.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13437.70 52.49 4776.46 1724.04 45954.33 00:29:03.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4128.20 16.13 15557.05 1889.89 62566.32 00:29:03.964 ======================================================== 00:29:03.964 Total : 27404.50 107.05 9372.23 1664.11 62566.32 00:29:03.964 00:29:03.964 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:29:03.964 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:03.964 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:03.964 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:03.964 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:03.964 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:03.964 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:03.964 rmmod nvme_tcp 00:29:03.964 rmmod nvme_fabrics 00:29:03.964 rmmod nvme_keyring 00:29:03.964 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:03.964 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:03.964 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:03.964 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 4027124 ']' 00:29:03.964 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 4027124 00:29:03.964 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 4027124 ']' 00:29:03.964 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 4027124 00:29:03.964 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:29:03.964 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:03.964 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4027124 00:29:03.964 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:03.964 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:03.964 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4027124' 00:29:03.964 killing process with pid 4027124 00:29:03.964 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 4027124 00:29:03.964 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 4027124 00:29:05.342 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:05.342 
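The pass/fail criterion in both runs is the nvmf_get_stats comparison visible above: without ADQ each of the four poll groups carries exactly one I/O qpair, while with ADQ the connections collapse onto the groups backing the hw_tc 1 queues and at least two groups stay idle (completed_nvme_io 0). A hedged sketch of the same check against a running target, assuming the stock SPDK rpc.py and jq, mirroring the log's own pipeline:

  # non-ADQ run: expect 4 poll groups each holding exactly one active I/O qpair
  scripts/rpc.py nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l

  # ADQ run: expect at least 2 poll groups with no I/O qpairs at all,
  # because the flower filter steers all port-4420 traffic onto the traffic-class queues
  scripts/rpc.py nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l

In this run the second count is 2 (groups 002 and 003 idle), which is why the [[ 2 -lt 2 ]] guard does not trip and the test proceeds to teardown.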
10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:05.342 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:05.342 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:05.342 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:29:05.342 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:05.342 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:29:05.342 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:05.342 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:05.342 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.342 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.342 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.352 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:07.352 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:29:07.352 00:29:07.352 real 0m54.184s 00:29:07.352 user 2m58.807s 00:29:07.352 sys 0m10.301s 00:29:07.352 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.352 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:07.352 ************************************ 00:29:07.352 END TEST nvmf_perf_adq 00:29:07.352 ************************************ 00:29:07.352 10:31:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:07.352 10:31:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:07.352 10:31:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.352 10:31:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:07.352 ************************************ 00:29:07.352 START TEST nvmf_shutdown 00:29:07.352 ************************************ 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:07.352 * Looking for test storage... 
00:29:07.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:07.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.352 --rc genhtml_branch_coverage=1 00:29:07.352 --rc genhtml_function_coverage=1 00:29:07.352 --rc genhtml_legend=1 00:29:07.352 --rc geninfo_all_blocks=1 00:29:07.352 --rc geninfo_unexecuted_blocks=1 00:29:07.352 00:29:07.352 ' 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:07.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.352 --rc genhtml_branch_coverage=1 00:29:07.352 --rc genhtml_function_coverage=1 00:29:07.352 --rc genhtml_legend=1 00:29:07.352 --rc geninfo_all_blocks=1 00:29:07.352 --rc geninfo_unexecuted_blocks=1 00:29:07.352 00:29:07.352 ' 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:07.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.352 --rc genhtml_branch_coverage=1 00:29:07.352 --rc genhtml_function_coverage=1 00:29:07.352 --rc genhtml_legend=1 00:29:07.352 --rc geninfo_all_blocks=1 00:29:07.352 --rc geninfo_unexecuted_blocks=1 00:29:07.352 00:29:07.352 ' 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:07.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.352 --rc genhtml_branch_coverage=1 00:29:07.352 --rc genhtml_function_coverage=1 00:29:07.352 --rc genhtml_legend=1 00:29:07.352 --rc geninfo_all_blocks=1 00:29:07.352 --rc geninfo_unexecuted_blocks=1 00:29:07.352 00:29:07.352 ' 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
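
The lcov probe just traced walks scripts/common.sh's cmp_versions helper: both version strings are split on ".", "-" and ":" and compared field by field. The function below is a self-contained sketch of the same idea, not the original helper; it assumes plain numeric fields (the real code also regex-checks each field, as the `=~ ^[0-9]+$` tests above show) and only splits on dots.

  # version_lt A B -> success when A < B, mirroring the `lt 1.15 2` comparison above.
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
          local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1    # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "1.15 < 2"      # prints, as in the lcov check above
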
00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.352 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.353 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.353 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.353 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:07.353 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.353 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:29:07.353 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:07.353 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:07.353 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.353 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.353 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.353 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:07.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:07.353 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:07.353 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:07.353 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:07.353 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:07.353 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:07.353 10:31:01 
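
One small piece of the common.sh setup above is the initiator identity: `nvme gen-hostnqn` emits a UUID-based NQN and NVME_HOSTID is simply the UUID portion of it (compare the two values at nvmf/common.sh@17-18). A two-line sketch of that derivation; the parameter-expansion trick used here is illustrative rather than necessarily how common.sh slices the string:

  # Derive the --hostnqn/--hostid pair used by NVME_HOST above.
  NVME_HOSTNQN=$(nvme gen-hostnqn)           # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}       # keep only the UUID after ":uuid:"
  echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"
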
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:07.353 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:07.353 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.353 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:07.611 ************************************ 00:29:07.611 START TEST nvmf_shutdown_tc1 00:29:07.611 ************************************ 00:29:07.611 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:29:07.611 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:29:07.612 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:07.612 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:07.612 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.612 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:07.612 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:07.612 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:07.612 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.612 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.612 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.612 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:07.612 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:07.612 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:07.612 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:12.881 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:12.881 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:12.881 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:12.881 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:12.881 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:12.881 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:12.881 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:12.881 10:31:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:29:12.881 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:12.882 10:31:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:12.882 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:12.882 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:12.882 Found net devices under 0000:af:00.0: cvl_0_0 00:29:12.882 10:31:06 
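
The device scan above is pure sysfs bookkeeping: each supported PCI function (here the two E810 ports, device ID 0x159b) is mapped to its kernel net device by globbing /sys/bus/pci/devices/<bdf>/net/, and the up/up test in the trace presumably reads the interface state. A reduced sketch of that lookup for one of the functions found above; the operstate read is an assumption, not a line visible in this log:

  # Map a PCI function to its net device(s), as gather_supported_nvmf_pci_devs does above.
  pci=0000:af:00.0
  for netdir in /sys/bus/pci/devices/$pci/net/*; do
      [[ -e $netdir ]] || continue                   # glob stays literal if the NIC is unbound
      dev=${netdir##*/}
      state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)
      echo "Found net devices under $pci: $dev (operstate: ${state:-unknown})"
  done
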
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:12.882 Found net devices under 0000:af:00.1: cvl_0_1 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:12.882 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.141 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.141 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.141 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:13.141 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:13.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:29:13.141 00:29:13.141 --- 10.0.0.2 ping statistics --- 00:29:13.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.141 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:29:13.141 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:13.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:29:13.141 00:29:13.141 --- 10.0.0.1 ping statistics --- 00:29:13.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.141 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:29:13.141 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.141 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:29:13.141 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:13.141 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.141 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:13.141 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:13.141 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.141 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:13.141 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:13.141 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:13.141 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:13.141 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:13.142 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:13.142 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=4032706 00:29:13.142 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 4032706 00:29:13.142 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:13.142 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 4032706 ']' 00:29:13.142 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.142 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:13.142 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
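
The topology nvmf_tgt was just launched into comes from the nvmftestinit/nvmf_tcp_init sequence above: the target port cvl_0_0 is moved into a private namespace with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, and the NVMe/TCP port is opened by an iptables rule tagged SPDK_NVMF so the cleanup pass seen at the end of the previous test (iptables-save | grep -v SPDK_NVMF | iptables-restore) can scrub exactly those rules. A condensed sketch of the same setup; interface names and addresses are taken from the trace, the comment text is illustrative, and the address flushes done by the real script are omitted:

  # Back-to-back NVMe/TCP topology, condensed from the nvmf_tcp_init trace above.
  target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

  ip netns add $ns
  ip link set $target_if netns $ns                       # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev $initiator_if              # initiator side, root namespace
  ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
  ip link set $initiator_if up
  ip netns exec $ns ip link set $target_if up
  ip netns exec $ns ip link set lo up

  # Tagged ACCEPT rule; cleanup later drops every saved rule carrying the SPDK_NVMF marker.
  iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF: allow NVMe/TCP on 4420'

  ping -c 1 10.0.0.2 && ip netns exec $ns ping -c 1 10.0.0.1    # sanity check both directions
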
00:29:13.142 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:13.142 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:13.142 [2024-12-13 10:31:06.925523] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:29:13.142 [2024-12-13 10:31:06.925616] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.400 [2024-12-13 10:31:07.046259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:13.400 [2024-12-13 10:31:07.152470] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.400 [2024-12-13 10:31:07.152516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.400 [2024-12-13 10:31:07.152526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:13.400 [2024-12-13 10:31:07.152535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:13.400 [2024-12-13 10:31:07.152543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:13.400 [2024-12-13 10:31:07.154709] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:13.400 [2024-12-13 10:31:07.154782] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:13.400 [2024-12-13 10:31:07.154865] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.400 [2024-12-13 10:31:07.154887] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:13.966 [2024-12-13 10:31:07.782557] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:13.966 10:31:07 
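
The startup just traced (nvmfappstart, waitforlisten on /var/tmp/spdk.sock, then the nvmf_create_transport RPC at shutdown.sh@21) boils down to: launch nvmf_tgt, wait for its UNIX-domain RPC socket, and create the TCP transport. A hedged sketch of the equivalent manual sequence with scripts/rpc.py; the polling loop and its timeout are illustrative, while the -t tcp -o -u 8192 arguments are copied verbatim from the trace:

  # Wait for the target's RPC socket, then create the TCP transport (cf. shutdown.sh@19-21 above).
  sock=/var/tmp/spdk.sock
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  for _ in {1..100}; do                    # ~10 s of 0.1 s polls, an arbitrary illustrative budget
      [[ -S $sock ]] && break              # the socket appears once the app has initialized
      sleep 0.1
  done
  [[ -S $sock ]] || { echo "nvmf_tgt never came up" >&2; exit 1; }

  $rpc_py -s $sock nvmf_create_transport -t tcp -o -u 8192
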
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.966 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:14.224 Malloc1 
00:29:14.224 [2024-12-13 10:31:07.953541] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.224 Malloc2 00:29:14.482 Malloc3 00:29:14.482 Malloc4 00:29:14.482 Malloc5 00:29:14.739 Malloc6 00:29:14.739 Malloc7 00:29:14.739 Malloc8 00:29:14.997 Malloc9 00:29:14.997 Malloc10 00:29:14.997 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.997 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:14.997 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:14.997 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:14.997 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=4032988 00:29:14.997 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 4032988 /var/tmp/bdevperf.sock 00:29:14.997 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 4032988 ']' 00:29:14.997 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:14.997 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:14.997 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:14.997 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:14.997 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:14.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
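
The create_subsystems loop above writes one config fragment per subsystem into rpcs.txt and replays the whole file through a single rpc_cmd batch, which is why Malloc1 through Malloc10 and the 10.0.0.2:4420 listener all appear in one burst. The fragments themselves are not echoed in this log, so the following is only an equivalent per-subsystem sequence spelled out as individual rpc.py calls, reusing the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE values (64/512) set earlier; the serial-number format is illustrative:

  # Roughly what each iteration of the create_subsystems loop above amounts to.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  for i in {1..10}; do
      $rpc_py bdev_malloc_create 64 512 -b Malloc$i                       # 64 MiB, 512 B blocks
      $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
          -t tcp -a 10.0.0.2 -s 4420
  done
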
00:29:14.997 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:14.997 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:14.997 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:14.997 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:14.998 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:14.998 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:14.998 { 00:29:14.998 "params": { 00:29:14.998 "name": "Nvme$subsystem", 00:29:14.998 "trtype": "$TEST_TRANSPORT", 00:29:14.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.998 "adrfam": "ipv4", 00:29:14.998 "trsvcid": "$NVMF_PORT", 00:29:14.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.998 "hdgst": ${hdgst:-false}, 00:29:14.998 "ddgst": ${ddgst:-false} 00:29:14.998 }, 00:29:14.998 "method": "bdev_nvme_attach_controller" 00:29:14.998 } 00:29:14.998 EOF 00:29:14.998 )") 00:29:14.998 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:15.256 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:15.256 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:15.256 { 00:29:15.256 "params": { 00:29:15.256 "name": "Nvme$subsystem", 00:29:15.256 "trtype": "$TEST_TRANSPORT", 00:29:15.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:15.256 "adrfam": "ipv4", 00:29:15.256 "trsvcid": "$NVMF_PORT", 00:29:15.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:15.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:15.256 "hdgst": ${hdgst:-false}, 00:29:15.256 "ddgst": ${ddgst:-false} 00:29:15.256 }, 00:29:15.256 "method": "bdev_nvme_attach_controller" 00:29:15.256 } 00:29:15.256 EOF 00:29:15.256 )") 00:29:15.256 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:15.256 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:15.256 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:15.256 { 00:29:15.256 "params": { 00:29:15.256 "name": "Nvme$subsystem", 00:29:15.256 "trtype": "$TEST_TRANSPORT", 00:29:15.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:15.256 "adrfam": "ipv4", 00:29:15.256 "trsvcid": "$NVMF_PORT", 00:29:15.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:15.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:15.256 "hdgst": ${hdgst:-false}, 00:29:15.256 "ddgst": ${ddgst:-false} 00:29:15.256 }, 00:29:15.256 "method": "bdev_nvme_attach_controller" 00:29:15.256 } 00:29:15.256 EOF 00:29:15.256 )") 00:29:15.256 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:15.256 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:15.256 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:29:15.256 { 00:29:15.256 "params": { 00:29:15.256 "name": "Nvme$subsystem", 00:29:15.256 "trtype": "$TEST_TRANSPORT", 00:29:15.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:15.256 "adrfam": "ipv4", 00:29:15.256 "trsvcid": "$NVMF_PORT", 00:29:15.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:15.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:15.256 "hdgst": ${hdgst:-false}, 00:29:15.256 "ddgst": ${ddgst:-false} 00:29:15.256 }, 00:29:15.256 "method": "bdev_nvme_attach_controller" 00:29:15.256 } 00:29:15.256 EOF 00:29:15.256 )") 00:29:15.256 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:15.256 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:15.256 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:15.256 { 00:29:15.256 "params": { 00:29:15.256 "name": "Nvme$subsystem", 00:29:15.256 "trtype": "$TEST_TRANSPORT", 00:29:15.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:15.256 "adrfam": "ipv4", 00:29:15.256 "trsvcid": "$NVMF_PORT", 00:29:15.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:15.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:15.256 "hdgst": ${hdgst:-false}, 00:29:15.256 "ddgst": ${ddgst:-false} 00:29:15.256 }, 00:29:15.256 "method": "bdev_nvme_attach_controller" 00:29:15.256 } 00:29:15.256 EOF 00:29:15.256 )") 00:29:15.256 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:15.256 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:15.256 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:15.256 { 00:29:15.256 "params": { 00:29:15.256 "name": "Nvme$subsystem", 00:29:15.256 "trtype": "$TEST_TRANSPORT", 00:29:15.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:15.256 "adrfam": "ipv4", 00:29:15.256 "trsvcid": "$NVMF_PORT", 00:29:15.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:15.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:15.256 "hdgst": ${hdgst:-false}, 00:29:15.256 "ddgst": ${ddgst:-false} 00:29:15.256 }, 00:29:15.256 "method": "bdev_nvme_attach_controller" 00:29:15.256 } 00:29:15.256 EOF 00:29:15.256 )") 00:29:15.256 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:15.256 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:15.256 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:15.256 { 00:29:15.256 "params": { 00:29:15.256 "name": "Nvme$subsystem", 00:29:15.256 "trtype": "$TEST_TRANSPORT", 00:29:15.256 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:15.256 "adrfam": "ipv4", 00:29:15.256 "trsvcid": "$NVMF_PORT", 00:29:15.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:15.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:15.256 "hdgst": ${hdgst:-false}, 00:29:15.256 "ddgst": ${ddgst:-false} 00:29:15.256 }, 00:29:15.256 "method": "bdev_nvme_attach_controller" 00:29:15.256 } 00:29:15.256 EOF 00:29:15.256 )") 00:29:15.256 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:15.256 10:31:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:15.256 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:15.256 { 00:29:15.256 "params": { 00:29:15.257 "name": "Nvme$subsystem", 00:29:15.257 "trtype": "$TEST_TRANSPORT", 00:29:15.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:15.257 "adrfam": "ipv4", 00:29:15.257 "trsvcid": "$NVMF_PORT", 00:29:15.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:15.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:15.257 "hdgst": ${hdgst:-false}, 00:29:15.257 "ddgst": ${ddgst:-false} 00:29:15.257 }, 00:29:15.257 "method": "bdev_nvme_attach_controller" 00:29:15.257 } 00:29:15.257 EOF 00:29:15.257 )") 00:29:15.257 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:15.257 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:15.257 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:15.257 { 00:29:15.257 "params": { 00:29:15.257 "name": "Nvme$subsystem", 00:29:15.257 "trtype": "$TEST_TRANSPORT", 00:29:15.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:15.257 "adrfam": "ipv4", 00:29:15.257 "trsvcid": "$NVMF_PORT", 00:29:15.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:15.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:15.257 "hdgst": ${hdgst:-false}, 00:29:15.257 "ddgst": ${ddgst:-false} 00:29:15.257 }, 00:29:15.257 "method": "bdev_nvme_attach_controller" 00:29:15.257 } 00:29:15.257 EOF 00:29:15.257 )") 00:29:15.257 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:15.257 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:15.257 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:15.257 { 00:29:15.257 "params": { 00:29:15.257 "name": "Nvme$subsystem", 00:29:15.257 "trtype": "$TEST_TRANSPORT", 00:29:15.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:15.257 "adrfam": "ipv4", 00:29:15.257 "trsvcid": "$NVMF_PORT", 00:29:15.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:15.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:15.257 "hdgst": ${hdgst:-false}, 00:29:15.257 "ddgst": ${ddgst:-false} 00:29:15.257 }, 00:29:15.257 "method": "bdev_nvme_attach_controller" 00:29:15.257 } 00:29:15.257 EOF 00:29:15.257 )") 00:29:15.257 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:15.257 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:29:15.257 [2024-12-13 10:31:08.951855] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:29:15.257 [2024-12-13 10:31:08.951945] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:15.257 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:15.257 10:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:15.257 "params": { 00:29:15.257 "name": "Nvme1", 00:29:15.257 "trtype": "tcp", 00:29:15.257 "traddr": "10.0.0.2", 00:29:15.257 "adrfam": "ipv4", 00:29:15.257 "trsvcid": "4420", 00:29:15.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:15.257 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:15.257 "hdgst": false, 00:29:15.257 "ddgst": false 00:29:15.257 }, 00:29:15.257 "method": "bdev_nvme_attach_controller" 00:29:15.257 },{ 00:29:15.257 "params": { 00:29:15.257 "name": "Nvme2", 00:29:15.257 "trtype": "tcp", 00:29:15.257 "traddr": "10.0.0.2", 00:29:15.257 "adrfam": "ipv4", 00:29:15.257 "trsvcid": "4420", 00:29:15.257 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:15.257 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:15.257 "hdgst": false, 00:29:15.257 "ddgst": false 00:29:15.257 }, 00:29:15.257 "method": "bdev_nvme_attach_controller" 00:29:15.257 },{ 00:29:15.257 "params": { 00:29:15.257 "name": "Nvme3", 00:29:15.257 "trtype": "tcp", 00:29:15.257 "traddr": "10.0.0.2", 00:29:15.257 "adrfam": "ipv4", 00:29:15.257 "trsvcid": "4420", 00:29:15.257 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:15.257 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:15.257 "hdgst": false, 00:29:15.257 "ddgst": false 00:29:15.257 }, 00:29:15.257 "method": "bdev_nvme_attach_controller" 00:29:15.257 },{ 00:29:15.257 "params": { 00:29:15.257 "name": "Nvme4", 00:29:15.257 "trtype": "tcp", 00:29:15.257 "traddr": "10.0.0.2", 00:29:15.257 "adrfam": "ipv4", 00:29:15.257 "trsvcid": "4420", 00:29:15.257 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:15.257 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:15.257 "hdgst": false, 00:29:15.257 "ddgst": false 00:29:15.257 }, 00:29:15.257 "method": "bdev_nvme_attach_controller" 00:29:15.257 },{ 00:29:15.257 "params": { 00:29:15.257 "name": "Nvme5", 00:29:15.257 "trtype": "tcp", 00:29:15.257 "traddr": "10.0.0.2", 00:29:15.257 "adrfam": "ipv4", 00:29:15.257 "trsvcid": "4420", 00:29:15.257 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:15.257 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:15.257 "hdgst": false, 00:29:15.257 "ddgst": false 00:29:15.257 }, 00:29:15.257 "method": "bdev_nvme_attach_controller" 00:29:15.257 },{ 00:29:15.257 "params": { 00:29:15.257 "name": "Nvme6", 00:29:15.257 "trtype": "tcp", 00:29:15.257 "traddr": "10.0.0.2", 00:29:15.257 "adrfam": "ipv4", 00:29:15.257 "trsvcid": "4420", 00:29:15.257 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:15.257 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:15.257 "hdgst": false, 00:29:15.257 "ddgst": false 00:29:15.257 }, 00:29:15.257 "method": "bdev_nvme_attach_controller" 00:29:15.257 },{ 00:29:15.257 "params": { 00:29:15.257 "name": "Nvme7", 00:29:15.257 "trtype": "tcp", 00:29:15.257 "traddr": "10.0.0.2", 00:29:15.257 "adrfam": "ipv4", 00:29:15.257 "trsvcid": "4420", 00:29:15.257 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:15.257 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:15.257 "hdgst": false, 00:29:15.257 "ddgst": false 00:29:15.257 }, 00:29:15.257 "method": "bdev_nvme_attach_controller" 00:29:15.257 },{ 
00:29:15.257 "params": { 00:29:15.257 "name": "Nvme8", 00:29:15.257 "trtype": "tcp", 00:29:15.257 "traddr": "10.0.0.2", 00:29:15.257 "adrfam": "ipv4", 00:29:15.257 "trsvcid": "4420", 00:29:15.257 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:15.257 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:15.257 "hdgst": false, 00:29:15.257 "ddgst": false 00:29:15.257 }, 00:29:15.257 "method": "bdev_nvme_attach_controller" 00:29:15.257 },{ 00:29:15.257 "params": { 00:29:15.257 "name": "Nvme9", 00:29:15.257 "trtype": "tcp", 00:29:15.257 "traddr": "10.0.0.2", 00:29:15.257 "adrfam": "ipv4", 00:29:15.257 "trsvcid": "4420", 00:29:15.257 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:15.257 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:15.257 "hdgst": false, 00:29:15.257 "ddgst": false 00:29:15.257 }, 00:29:15.257 "method": "bdev_nvme_attach_controller" 00:29:15.257 },{ 00:29:15.257 "params": { 00:29:15.257 "name": "Nvme10", 00:29:15.257 "trtype": "tcp", 00:29:15.257 "traddr": "10.0.0.2", 00:29:15.257 "adrfam": "ipv4", 00:29:15.257 "trsvcid": "4420", 00:29:15.257 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:15.257 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:15.257 "hdgst": false, 00:29:15.257 "ddgst": false 00:29:15.257 }, 00:29:15.257 "method": "bdev_nvme_attach_controller" 00:29:15.257 }' 00:29:15.257 [2024-12-13 10:31:09.067592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.515 [2024-12-13 10:31:09.181107] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.889 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:16.889 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:16.889 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:16.889 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.889 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:16.889 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.889 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 4032988 00:29:16.889 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:29:16.889 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:29:17.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 4032988 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:17.820 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 4032706 00:29:17.820 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:17.820 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:17.820 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@560 -- # config=() 00:29:17.820 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:17.820 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.820 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.820 { 00:29:17.820 "params": { 00:29:17.820 "name": "Nvme$subsystem", 00:29:17.820 "trtype": "$TEST_TRANSPORT", 00:29:17.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.820 "adrfam": "ipv4", 00:29:17.820 "trsvcid": "$NVMF_PORT", 00:29:17.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.820 "hdgst": ${hdgst:-false}, 00:29:17.820 "ddgst": ${ddgst:-false} 00:29:17.820 }, 00:29:17.820 "method": "bdev_nvme_attach_controller" 00:29:17.820 } 00:29:17.820 EOF 00:29:17.820 )") 00:29:17.820 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:17.820 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.820 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.820 { 00:29:17.820 "params": { 00:29:17.820 "name": "Nvme$subsystem", 00:29:17.820 "trtype": "$TEST_TRANSPORT", 00:29:17.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.820 "adrfam": "ipv4", 00:29:17.820 "trsvcid": "$NVMF_PORT", 00:29:17.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.821 "hdgst": ${hdgst:-false}, 00:29:17.821 "ddgst": ${ddgst:-false} 00:29:17.821 }, 00:29:17.821 "method": "bdev_nvme_attach_controller" 00:29:17.821 } 00:29:17.821 EOF 00:29:17.821 )") 00:29:17.821 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:18.079 { 00:29:18.079 "params": { 00:29:18.079 "name": "Nvme$subsystem", 00:29:18.079 "trtype": "$TEST_TRANSPORT", 00:29:18.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:18.079 "adrfam": "ipv4", 00:29:18.079 "trsvcid": "$NVMF_PORT", 00:29:18.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:18.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:18.079 "hdgst": ${hdgst:-false}, 00:29:18.079 "ddgst": ${ddgst:-false} 00:29:18.079 }, 00:29:18.079 "method": "bdev_nvme_attach_controller" 00:29:18.079 } 00:29:18.079 EOF 00:29:18.079 )") 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:18.079 { 00:29:18.079 "params": { 00:29:18.079 "name": "Nvme$subsystem", 00:29:18.079 "trtype": "$TEST_TRANSPORT", 00:29:18.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:18.079 "adrfam": "ipv4", 00:29:18.079 "trsvcid": "$NVMF_PORT", 00:29:18.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:18.079 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:18.079 "hdgst": ${hdgst:-false}, 00:29:18.079 "ddgst": ${ddgst:-false} 00:29:18.079 }, 00:29:18.079 "method": "bdev_nvme_attach_controller" 00:29:18.079 } 00:29:18.079 EOF 00:29:18.079 )") 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:18.079 { 00:29:18.079 "params": { 00:29:18.079 "name": "Nvme$subsystem", 00:29:18.079 "trtype": "$TEST_TRANSPORT", 00:29:18.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:18.079 "adrfam": "ipv4", 00:29:18.079 "trsvcid": "$NVMF_PORT", 00:29:18.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:18.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:18.079 "hdgst": ${hdgst:-false}, 00:29:18.079 "ddgst": ${ddgst:-false} 00:29:18.079 }, 00:29:18.079 "method": "bdev_nvme_attach_controller" 00:29:18.079 } 00:29:18.079 EOF 00:29:18.079 )") 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:18.079 { 00:29:18.079 "params": { 00:29:18.079 "name": "Nvme$subsystem", 00:29:18.079 "trtype": "$TEST_TRANSPORT", 00:29:18.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:18.079 "adrfam": "ipv4", 00:29:18.079 "trsvcid": "$NVMF_PORT", 00:29:18.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:18.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:18.079 "hdgst": ${hdgst:-false}, 00:29:18.079 "ddgst": ${ddgst:-false} 00:29:18.079 }, 00:29:18.079 "method": "bdev_nvme_attach_controller" 00:29:18.079 } 00:29:18.079 EOF 00:29:18.079 )") 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:18.079 { 00:29:18.079 "params": { 00:29:18.079 "name": "Nvme$subsystem", 00:29:18.079 "trtype": "$TEST_TRANSPORT", 00:29:18.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:18.079 "adrfam": "ipv4", 00:29:18.079 "trsvcid": "$NVMF_PORT", 00:29:18.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:18.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:18.079 "hdgst": ${hdgst:-false}, 00:29:18.079 "ddgst": ${ddgst:-false} 00:29:18.079 }, 00:29:18.079 "method": "bdev_nvme_attach_controller" 00:29:18.079 } 00:29:18.079 EOF 00:29:18.079 )") 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:18.079 { 00:29:18.079 "params": { 00:29:18.079 "name": "Nvme$subsystem", 00:29:18.079 "trtype": "$TEST_TRANSPORT", 
00:29:18.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:18.079 "adrfam": "ipv4", 00:29:18.079 "trsvcid": "$NVMF_PORT", 00:29:18.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:18.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:18.079 "hdgst": ${hdgst:-false}, 00:29:18.079 "ddgst": ${ddgst:-false} 00:29:18.079 }, 00:29:18.079 "method": "bdev_nvme_attach_controller" 00:29:18.079 } 00:29:18.079 EOF 00:29:18.079 )") 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:18.079 { 00:29:18.079 "params": { 00:29:18.079 "name": "Nvme$subsystem", 00:29:18.079 "trtype": "$TEST_TRANSPORT", 00:29:18.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:18.079 "adrfam": "ipv4", 00:29:18.079 "trsvcid": "$NVMF_PORT", 00:29:18.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:18.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:18.079 "hdgst": ${hdgst:-false}, 00:29:18.079 "ddgst": ${ddgst:-false} 00:29:18.079 }, 00:29:18.079 "method": "bdev_nvme_attach_controller" 00:29:18.079 } 00:29:18.079 EOF 00:29:18.079 )") 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:18.079 { 00:29:18.079 "params": { 00:29:18.079 "name": "Nvme$subsystem", 00:29:18.079 "trtype": "$TEST_TRANSPORT", 00:29:18.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:18.079 "adrfam": "ipv4", 00:29:18.079 "trsvcid": "$NVMF_PORT", 00:29:18.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:18.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:18.079 "hdgst": ${hdgst:-false}, 00:29:18.079 "ddgst": ${ddgst:-false} 00:29:18.079 }, 00:29:18.079 "method": "bdev_nvme_attach_controller" 00:29:18.079 } 00:29:18.079 EOF 00:29:18.079 )") 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:29:18.079 [2024-12-13 10:31:11.770630] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:29:18.079 [2024-12-13 10:31:11.770728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4033468 ] 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:18.079 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:18.079 "params": { 00:29:18.079 "name": "Nvme1", 00:29:18.079 "trtype": "tcp", 00:29:18.079 "traddr": "10.0.0.2", 00:29:18.079 "adrfam": "ipv4", 00:29:18.079 "trsvcid": "4420", 00:29:18.079 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:18.079 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:18.079 "hdgst": false, 00:29:18.079 "ddgst": false 00:29:18.079 }, 00:29:18.080 "method": "bdev_nvme_attach_controller" 00:29:18.080 },{ 00:29:18.080 "params": { 00:29:18.080 "name": "Nvme2", 00:29:18.080 "trtype": "tcp", 00:29:18.080 "traddr": "10.0.0.2", 00:29:18.080 "adrfam": "ipv4", 00:29:18.080 "trsvcid": "4420", 00:29:18.080 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:18.080 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:18.080 "hdgst": false, 00:29:18.080 "ddgst": false 00:29:18.080 }, 00:29:18.080 "method": "bdev_nvme_attach_controller" 00:29:18.080 },{ 00:29:18.080 "params": { 00:29:18.080 "name": "Nvme3", 00:29:18.080 "trtype": "tcp", 00:29:18.080 "traddr": "10.0.0.2", 00:29:18.080 "adrfam": "ipv4", 00:29:18.080 "trsvcid": "4420", 00:29:18.080 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:18.080 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:18.080 "hdgst": false, 00:29:18.080 "ddgst": false 00:29:18.080 }, 00:29:18.080 "method": "bdev_nvme_attach_controller" 00:29:18.080 },{ 00:29:18.080 "params": { 00:29:18.080 "name": "Nvme4", 00:29:18.080 "trtype": "tcp", 00:29:18.080 "traddr": "10.0.0.2", 00:29:18.080 "adrfam": "ipv4", 00:29:18.080 "trsvcid": "4420", 00:29:18.080 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:18.080 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:18.080 "hdgst": false, 00:29:18.080 "ddgst": false 00:29:18.080 }, 00:29:18.080 "method": "bdev_nvme_attach_controller" 00:29:18.080 },{ 00:29:18.080 "params": { 00:29:18.080 "name": "Nvme5", 00:29:18.080 "trtype": "tcp", 00:29:18.080 "traddr": "10.0.0.2", 00:29:18.080 "adrfam": "ipv4", 00:29:18.080 "trsvcid": "4420", 00:29:18.080 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:18.080 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:18.080 "hdgst": false, 00:29:18.080 "ddgst": false 00:29:18.080 }, 00:29:18.080 "method": "bdev_nvme_attach_controller" 00:29:18.080 },{ 00:29:18.080 "params": { 00:29:18.080 "name": "Nvme6", 00:29:18.080 "trtype": "tcp", 00:29:18.080 "traddr": "10.0.0.2", 00:29:18.080 "adrfam": "ipv4", 00:29:18.080 "trsvcid": "4420", 00:29:18.080 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:18.080 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:18.080 "hdgst": false, 00:29:18.080 "ddgst": false 00:29:18.080 }, 00:29:18.080 "method": "bdev_nvme_attach_controller" 00:29:18.080 },{ 00:29:18.080 "params": { 00:29:18.080 "name": "Nvme7", 00:29:18.080 "trtype": "tcp", 00:29:18.080 "traddr": "10.0.0.2", 00:29:18.080 "adrfam": "ipv4", 00:29:18.080 "trsvcid": "4420", 00:29:18.080 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:18.080 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:18.080 "hdgst": false, 00:29:18.080 "ddgst": false 00:29:18.080 }, 00:29:18.080 "method": "bdev_nvme_attach_controller" 
00:29:18.080 },{ 00:29:18.080 "params": { 00:29:18.080 "name": "Nvme8", 00:29:18.080 "trtype": "tcp", 00:29:18.080 "traddr": "10.0.0.2", 00:29:18.080 "adrfam": "ipv4", 00:29:18.080 "trsvcid": "4420", 00:29:18.080 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:18.080 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:18.080 "hdgst": false, 00:29:18.080 "ddgst": false 00:29:18.080 }, 00:29:18.080 "method": "bdev_nvme_attach_controller" 00:29:18.080 },{ 00:29:18.080 "params": { 00:29:18.080 "name": "Nvme9", 00:29:18.080 "trtype": "tcp", 00:29:18.080 "traddr": "10.0.0.2", 00:29:18.080 "adrfam": "ipv4", 00:29:18.080 "trsvcid": "4420", 00:29:18.080 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:18.080 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:18.080 "hdgst": false, 00:29:18.080 "ddgst": false 00:29:18.080 }, 00:29:18.080 "method": "bdev_nvme_attach_controller" 00:29:18.080 },{ 00:29:18.080 "params": { 00:29:18.080 "name": "Nvme10", 00:29:18.080 "trtype": "tcp", 00:29:18.080 "traddr": "10.0.0.2", 00:29:18.080 "adrfam": "ipv4", 00:29:18.080 "trsvcid": "4420", 00:29:18.080 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:18.080 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:18.080 "hdgst": false, 00:29:18.080 "ddgst": false 00:29:18.080 }, 00:29:18.080 "method": "bdev_nvme_attach_controller" 00:29:18.080 }' 00:29:18.080 [2024-12-13 10:31:11.888129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.338 [2024-12-13 10:31:12.002584] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.238 Running I/O for 1 seconds... 00:29:21.172 1947.00 IOPS, 121.69 MiB/s 00:29:21.172 Latency(us) 00:29:21.172 [2024-12-13T09:31:15.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.172 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:21.172 Verification LBA range: start 0x0 length 0x400 00:29:21.172 Nvme1n1 : 1.11 230.59 14.41 0.00 0.00 274941.07 22469.49 240673.16 00:29:21.172 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:21.172 Verification LBA range: start 0x0 length 0x400 00:29:21.172 Nvme2n1 : 1.06 246.29 15.39 0.00 0.00 251378.75 7146.54 224694.86 00:29:21.172 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:21.172 Verification LBA range: start 0x0 length 0x400 00:29:21.172 Nvme3n1 : 1.15 278.63 17.41 0.00 0.00 217876.87 6397.56 247663.66 00:29:21.172 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:21.172 Verification LBA range: start 0x0 length 0x400 00:29:21.172 Nvme4n1 : 1.10 252.80 15.80 0.00 0.00 232744.85 10673.01 248662.31 00:29:21.172 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:21.172 Verification LBA range: start 0x0 length 0x400 00:29:21.172 Nvme5n1 : 1.14 241.51 15.09 0.00 0.00 236352.55 8550.89 241671.80 00:29:21.172 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:21.172 Verification LBA range: start 0x0 length 0x400 00:29:21.172 Nvme6n1 : 1.11 230.05 14.38 0.00 0.00 254621.74 19348.72 243669.09 00:29:21.172 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:21.172 Verification LBA range: start 0x0 length 0x400 00:29:21.172 Nvme7n1 : 1.16 274.95 17.18 0.00 0.00 210511.77 16976.94 246665.02 00:29:21.172 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:21.172 Verification LBA range: start 0x0 length 0x400 00:29:21.172 Nvme8n1 : 1.17 273.98 17.12 0.00 0.00 208019.16 13544.11 
242670.45 00:29:21.172 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:21.172 Verification LBA range: start 0x0 length 0x400 00:29:21.172 Nvme9n1 : 1.15 228.61 14.29 0.00 0.00 242717.02 8301.23 249660.95 00:29:21.172 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:21.172 Verification LBA range: start 0x0 length 0x400 00:29:21.172 Nvme10n1 : 1.15 221.92 13.87 0.00 0.00 248274.41 17850.76 265639.25 00:29:21.172 [2024-12-13T09:31:15.063Z] =================================================================================================================== 00:29:21.172 [2024-12-13T09:31:15.063Z] Total : 2479.32 154.96 0.00 0.00 235981.47 6397.56 265639.25 00:29:22.106 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:22.106 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:22.106 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:22.106 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:22.106 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:22.106 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:22.106 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:22.364 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:22.364 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:22.364 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:22.364 10:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:22.364 rmmod nvme_tcp 00:29:22.364 rmmod nvme_fabrics 00:29:22.364 rmmod nvme_keyring 00:29:22.364 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:22.364 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:22.364 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:22.364 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 4032706 ']' 00:29:22.364 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 4032706 00:29:22.364 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 4032706 ']' 00:29:22.364 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 4032706 00:29:22.364 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:29:22.364 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:22.364 10:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4032706 00:29:22.364 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:22.364 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:22.364 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4032706' 00:29:22.364 killing process with pid 4032706 00:29:22.364 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 4032706 00:29:22.364 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 4032706 00:29:25.645 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:25.645 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:25.645 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:25.645 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:25.645 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:25.645 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:29:25.645 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:29:25.645 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:25.645 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:25.645 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.645 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.645 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.547 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:27.547 00:29:27.547 real 0m19.973s 00:29:27.547 user 0m53.873s 00:29:27.547 sys 0m5.899s 00:29:27.547 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:27.547 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:27.547 ************************************ 00:29:27.547 END TEST nvmf_shutdown_tc1 00:29:27.547 ************************************ 00:29:27.547 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:27.547 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:27.547 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:27.547 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:29:27.547 ************************************ 00:29:27.547 START TEST nvmf_shutdown_tc2 00:29:27.547 ************************************ 00:29:27.547 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:29:27.547 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:27.547 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:27.547 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:27.547 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:27.547 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:27.547 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 
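Before tc2 sets up its interfaces, the tc1 teardown traced just above unloads the host NVMe/TCP modules, kills the target, strips only the SPDK-tagged iptables rules and removes the namespace. Roughly, with remove_spdk_ns assumed to delete the cvl_0_0_ns_spdk namespace (its body runs with xtrace disabled):

    modprobe -v -r nvme-tcp                               # verbose output shows nvme_fabrics/nvme_keyring going too
    killprocess "$nvmfpid"                                # test helper; signals and waits for pid 4032706 here
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # keep every rule not tagged by the test
    ip netns delete cvl_0_0_ns_spdk                       # assumed effect of remove_spdk_ns
    ip -4 addr flush cvl_0_1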
00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:27.548 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:27.548 10:31:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:27.548 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:27.548 Found net devices under 0000:af:00.0: cvl_0_0 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.548 10:31:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:27.548 Found net devices under 0000:af:00.1: cvl_0_1 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 
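The device scan traced above boils down to matching supported vendor:device IDs against a cached PCI map and reading the net/ entries under each matching device. A sketch under that assumption (pci_bus_cache is the associative map the trace implies, populated by an earlier bus scan; only the E810 IDs are kept here because that is what this run matched, two 0x159b ports):

    declare -A pci_bus_cache           # "vendor:device" -> PCI addresses, filled elsewhere
    intel=0x8086
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
    pci_devs=("${e810[@]}")
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        net_devs+=("${pci_net_devs[@]##*/}")   # yields cvl_0_0 and cvl_0_1 in this run
    done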
00:29:27.548 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:27.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:27.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:29:27.807 00:29:27.807 --- 10.0.0.2 ping statistics --- 00:29:27.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.807 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:27.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:27.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:29:27.807 00:29:27.807 --- 10.0.0.1 ping statistics --- 00:29:27.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.807 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=4035141 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 4035141 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 4035141 ']' 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
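The namespace plumbing and reachability checks traced above condense to the commands below; the ipts wrapper in the trace only adds an SPDK_NVMF comment tag to the iptables rule so the teardown can find it later.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # target reachable from the initiator side
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # and the reverse direction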
00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:27.807 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:28.067 [2024-12-13 10:31:21.763359] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:29:28.067 [2024-12-13 10:31:21.763470] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:28.067 [2024-12-13 10:31:21.882533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:28.325 [2024-12-13 10:31:21.989524] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.325 [2024-12-13 10:31:21.989572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.325 [2024-12-13 10:31:21.989583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.325 [2024-12-13 10:31:21.989593] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.325 [2024-12-13 10:31:21.989601] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:28.325 [2024-12-13 10:31:21.991989] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:28.325 [2024-12-13 10:31:21.992060] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:28.325 [2024-12-13 10:31:21.992143] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.325 [2024-12-13 10:31:21.992165] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:28.891 [2024-12-13 10:31:22.609668] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:28.891 10:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.891 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:28.891 Malloc1 
00:29:28.891 [2024-12-13 10:31:22.765102] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:29.149 Malloc2 00:29:29.149 Malloc3 00:29:29.149 Malloc4 00:29:29.407 Malloc5 00:29:29.407 Malloc6 00:29:29.407 Malloc7 00:29:29.665 Malloc8 00:29:29.665 Malloc9 00:29:29.923 Malloc10 00:29:29.923 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.923 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:29.923 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:29.923 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.923 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=4035628 00:29:29.923 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 4035628 /var/tmp/bdevperf.sock 00:29:29.923 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 4035628 ']' 00:29:29.923 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:29.923 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:29.923 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:29.923 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:29.923 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:29.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
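For tc2 the ten subsystems are created in one batched RPC call and a long-lived bdevperf is then started on its own RPC socket so the test can drive it while the target is shut down underneath it. A sketch of both steps; the exact RPC arguments in the batch (Malloc sizes, serial numbers, listener flags) are assumptions inferred from the Malloc bdevs and the 10.0.0.2:4420 listener notices above, not a verbatim copy of shutdown.sh, and $testdir stands for the test/nvmf/target directory seen in the rm -rf earlier.

    # Batch the per-subsystem setup into rpcs.txt, then send it in one rpc_cmd call.
    for i in "${num_subsystems[@]}"; do
        cat >>"$testdir/rpcs.txt" <<-EOF
    bdev_malloc_create -b Malloc$i 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
    done
    rpc_cmd <"$testdir/rpcs.txt"

    # Start bdevperf with its own RPC socket and wait until it is listening.
    "$rootdir/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!
    waitforlisten "$perfpid" /var/tmp/bdevperf.sock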
00:29:29.923 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:29:29.923 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:29.923 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:29:29.923 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.923 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.923 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.923 { 00:29:29.923 "params": { 00:29:29.923 "name": "Nvme$subsystem", 00:29:29.923 "trtype": "$TEST_TRANSPORT", 00:29:29.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.923 "adrfam": "ipv4", 00:29:29.923 "trsvcid": "$NVMF_PORT", 00:29:29.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.923 "hdgst": ${hdgst:-false}, 00:29:29.923 "ddgst": ${ddgst:-false} 00:29:29.923 }, 00:29:29.923 "method": "bdev_nvme_attach_controller" 00:29:29.923 } 00:29:29.923 EOF 00:29:29.923 )") 00:29:29.923 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.924 { 00:29:29.924 "params": { 00:29:29.924 "name": "Nvme$subsystem", 00:29:29.924 "trtype": "$TEST_TRANSPORT", 00:29:29.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.924 "adrfam": "ipv4", 00:29:29.924 "trsvcid": "$NVMF_PORT", 00:29:29.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.924 "hdgst": ${hdgst:-false}, 00:29:29.924 "ddgst": ${ddgst:-false} 00:29:29.924 }, 00:29:29.924 "method": "bdev_nvme_attach_controller" 00:29:29.924 } 00:29:29.924 EOF 00:29:29.924 )") 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.924 { 00:29:29.924 "params": { 00:29:29.924 "name": "Nvme$subsystem", 00:29:29.924 "trtype": "$TEST_TRANSPORT", 00:29:29.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.924 "adrfam": "ipv4", 00:29:29.924 "trsvcid": "$NVMF_PORT", 00:29:29.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.924 "hdgst": ${hdgst:-false}, 00:29:29.924 "ddgst": ${ddgst:-false} 00:29:29.924 }, 00:29:29.924 "method": "bdev_nvme_attach_controller" 00:29:29.924 } 00:29:29.924 EOF 00:29:29.924 )") 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:29:29.924 { 00:29:29.924 "params": { 00:29:29.924 "name": "Nvme$subsystem", 00:29:29.924 "trtype": "$TEST_TRANSPORT", 00:29:29.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.924 "adrfam": "ipv4", 00:29:29.924 "trsvcid": "$NVMF_PORT", 00:29:29.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.924 "hdgst": ${hdgst:-false}, 00:29:29.924 "ddgst": ${ddgst:-false} 00:29:29.924 }, 00:29:29.924 "method": "bdev_nvme_attach_controller" 00:29:29.924 } 00:29:29.924 EOF 00:29:29.924 )") 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.924 { 00:29:29.924 "params": { 00:29:29.924 "name": "Nvme$subsystem", 00:29:29.924 "trtype": "$TEST_TRANSPORT", 00:29:29.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.924 "adrfam": "ipv4", 00:29:29.924 "trsvcid": "$NVMF_PORT", 00:29:29.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.924 "hdgst": ${hdgst:-false}, 00:29:29.924 "ddgst": ${ddgst:-false} 00:29:29.924 }, 00:29:29.924 "method": "bdev_nvme_attach_controller" 00:29:29.924 } 00:29:29.924 EOF 00:29:29.924 )") 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.924 { 00:29:29.924 "params": { 00:29:29.924 "name": "Nvme$subsystem", 00:29:29.924 "trtype": "$TEST_TRANSPORT", 00:29:29.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.924 "adrfam": "ipv4", 00:29:29.924 "trsvcid": "$NVMF_PORT", 00:29:29.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.924 "hdgst": ${hdgst:-false}, 00:29:29.924 "ddgst": ${ddgst:-false} 00:29:29.924 }, 00:29:29.924 "method": "bdev_nvme_attach_controller" 00:29:29.924 } 00:29:29.924 EOF 00:29:29.924 )") 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.924 { 00:29:29.924 "params": { 00:29:29.924 "name": "Nvme$subsystem", 00:29:29.924 "trtype": "$TEST_TRANSPORT", 00:29:29.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.924 "adrfam": "ipv4", 00:29:29.924 "trsvcid": "$NVMF_PORT", 00:29:29.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.924 "hdgst": ${hdgst:-false}, 00:29:29.924 "ddgst": ${ddgst:-false} 00:29:29.924 }, 00:29:29.924 "method": "bdev_nvme_attach_controller" 00:29:29.924 } 00:29:29.924 EOF 00:29:29.924 )") 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:29.924 10:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.924 { 00:29:29.924 "params": { 00:29:29.924 "name": "Nvme$subsystem", 00:29:29.924 "trtype": "$TEST_TRANSPORT", 00:29:29.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.924 "adrfam": "ipv4", 00:29:29.924 "trsvcid": "$NVMF_PORT", 00:29:29.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.924 "hdgst": ${hdgst:-false}, 00:29:29.924 "ddgst": ${ddgst:-false} 00:29:29.924 }, 00:29:29.924 "method": "bdev_nvme_attach_controller" 00:29:29.924 } 00:29:29.924 EOF 00:29:29.924 )") 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.924 { 00:29:29.924 "params": { 00:29:29.924 "name": "Nvme$subsystem", 00:29:29.924 "trtype": "$TEST_TRANSPORT", 00:29:29.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.924 "adrfam": "ipv4", 00:29:29.924 "trsvcid": "$NVMF_PORT", 00:29:29.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.924 "hdgst": ${hdgst:-false}, 00:29:29.924 "ddgst": ${ddgst:-false} 00:29:29.924 }, 00:29:29.924 "method": "bdev_nvme_attach_controller" 00:29:29.924 } 00:29:29.924 EOF 00:29:29.924 )") 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:29.924 { 00:29:29.924 "params": { 00:29:29.924 "name": "Nvme$subsystem", 00:29:29.924 "trtype": "$TEST_TRANSPORT", 00:29:29.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.924 "adrfam": "ipv4", 00:29:29.924 "trsvcid": "$NVMF_PORT", 00:29:29.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.924 "hdgst": ${hdgst:-false}, 00:29:29.924 "ddgst": ${ddgst:-false} 00:29:29.924 }, 00:29:29.924 "method": "bdev_nvme_attach_controller" 00:29:29.924 } 00:29:29.924 EOF 00:29:29.924 )") 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:29:29.924 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:29.924 "params": { 00:29:29.924 "name": "Nvme1", 00:29:29.924 "trtype": "tcp", 00:29:29.924 "traddr": "10.0.0.2", 00:29:29.924 "adrfam": "ipv4", 00:29:29.924 "trsvcid": "4420", 00:29:29.924 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:29.924 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:29.924 "hdgst": false, 00:29:29.924 "ddgst": false 00:29:29.924 }, 00:29:29.924 "method": "bdev_nvme_attach_controller" 00:29:29.924 },{ 00:29:29.924 "params": { 00:29:29.924 "name": "Nvme2", 00:29:29.924 "trtype": "tcp", 00:29:29.924 "traddr": "10.0.0.2", 00:29:29.924 "adrfam": "ipv4", 00:29:29.924 "trsvcid": "4420", 00:29:29.924 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:29.924 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:29.924 "hdgst": false, 00:29:29.924 "ddgst": false 00:29:29.924 }, 00:29:29.924 "method": "bdev_nvme_attach_controller" 00:29:29.924 },{ 00:29:29.924 "params": { 00:29:29.924 "name": "Nvme3", 00:29:29.924 "trtype": "tcp", 00:29:29.924 "traddr": "10.0.0.2", 00:29:29.924 "adrfam": "ipv4", 00:29:29.924 "trsvcid": "4420", 00:29:29.924 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:29.924 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:29.924 "hdgst": false, 00:29:29.924 "ddgst": false 00:29:29.924 }, 00:29:29.924 "method": "bdev_nvme_attach_controller" 00:29:29.924 },{ 00:29:29.924 "params": { 00:29:29.924 "name": "Nvme4", 00:29:29.924 "trtype": "tcp", 00:29:29.924 "traddr": "10.0.0.2", 00:29:29.924 "adrfam": "ipv4", 00:29:29.924 "trsvcid": "4420", 00:29:29.925 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:29.925 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:29.925 "hdgst": false, 00:29:29.925 "ddgst": false 00:29:29.925 }, 00:29:29.925 "method": "bdev_nvme_attach_controller" 00:29:29.925 },{ 00:29:29.925 "params": { 00:29:29.925 "name": "Nvme5", 00:29:29.925 "trtype": "tcp", 00:29:29.925 "traddr": "10.0.0.2", 00:29:29.925 "adrfam": "ipv4", 00:29:29.925 "trsvcid": "4420", 00:29:29.925 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:29.925 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:29.925 "hdgst": false, 00:29:29.925 "ddgst": false 00:29:29.925 }, 00:29:29.925 "method": "bdev_nvme_attach_controller" 00:29:29.925 },{ 00:29:29.925 "params": { 00:29:29.925 "name": "Nvme6", 00:29:29.925 "trtype": "tcp", 00:29:29.925 "traddr": "10.0.0.2", 00:29:29.925 "adrfam": "ipv4", 00:29:29.925 "trsvcid": "4420", 00:29:29.925 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:29.925 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:29.925 "hdgst": false, 00:29:29.925 "ddgst": false 00:29:29.925 }, 00:29:29.925 "method": "bdev_nvme_attach_controller" 00:29:29.925 },{ 00:29:29.925 "params": { 00:29:29.925 "name": "Nvme7", 00:29:29.925 "trtype": "tcp", 00:29:29.925 "traddr": "10.0.0.2", 00:29:29.925 "adrfam": "ipv4", 00:29:29.925 "trsvcid": "4420", 00:29:29.925 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:29.925 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:29.925 "hdgst": false, 00:29:29.925 "ddgst": false 00:29:29.925 }, 00:29:29.925 "method": "bdev_nvme_attach_controller" 00:29:29.925 },{ 00:29:29.925 "params": { 00:29:29.925 "name": "Nvme8", 00:29:29.925 "trtype": "tcp", 00:29:29.925 "traddr": "10.0.0.2", 00:29:29.925 "adrfam": "ipv4", 00:29:29.925 "trsvcid": "4420", 00:29:29.925 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:29.925 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:29.925 "hdgst": false, 00:29:29.925 "ddgst": false 00:29:29.925 }, 00:29:29.925 "method": "bdev_nvme_attach_controller" 00:29:29.925 },{ 00:29:29.925 "params": { 00:29:29.925 "name": "Nvme9", 00:29:29.925 "trtype": "tcp", 00:29:29.925 "traddr": "10.0.0.2", 00:29:29.925 "adrfam": "ipv4", 00:29:29.925 "trsvcid": "4420", 00:29:29.925 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:29.925 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:29.925 "hdgst": false, 00:29:29.925 "ddgst": false 00:29:29.925 }, 00:29:29.925 "method": "bdev_nvme_attach_controller" 00:29:29.925 },{ 00:29:29.925 "params": { 00:29:29.925 "name": "Nvme10", 00:29:29.925 "trtype": "tcp", 00:29:29.925 "traddr": "10.0.0.2", 00:29:29.925 "adrfam": "ipv4", 00:29:29.925 "trsvcid": "4420", 00:29:29.925 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:29.925 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:29.925 "hdgst": false, 00:29:29.925 "ddgst": false 00:29:29.925 }, 00:29:29.925 "method": "bdev_nvme_attach_controller" 00:29:29.925 }' 00:29:29.925 [2024-12-13 10:31:23.739851] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:29:29.925 [2024-12-13 10:31:23.739933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4035628 ] 00:29:30.184 [2024-12-13 10:31:23.854985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.184 [2024-12-13 10:31:23.970362] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.082 Running I/O for 10 seconds... 00:29:32.648 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:32.648 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:32.648 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:32.648 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.648 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:32.648 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.648 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:32.648 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:32.648 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:32.648 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:32.648 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:32.648 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:32.648 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:32.648 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:32.648 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:32.648 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.648 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:32.648 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.648 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:32.648 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:32.648 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:32.906 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:32.906 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:32.906 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:32.906 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:32.906 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.906 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:32.906 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.906 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=155 00:29:32.906 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 155 -ge 100 ']' 00:29:32.906 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:32.906 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:32.906 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:32.906 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 4035628 00:29:32.906 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 4035628 ']' 00:29:32.906 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 4035628 00:29:32.906 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:32.906 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.906 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4035628 00:29:32.906 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:32.906 10:31:26 
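Note: the waitforio helper traced above is a bounded polling loop: up to 10 attempts, 0.25 s apart, reading num_read_ops for one bdev from bdevperf's RPC socket until it reaches 100. A stand-alone approximation, with the harness's rpc_cmd wrapper replaced by the stock scripts/rpc.py client (that substitution is an assumption; the query itself matches the trace):

waitforio_sketch() {
    local rpc_sock=$1 bdev=$2       # e.g. /var/tmp/bdevperf.sock Nvme1n1
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(./scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0                   # enough I/O observed, bdevperf is really running
            break
        fi
        sleep 0.25
    done
    return $ret
}

In the run above the first sample returned 67 and the second 155, so the loop broke on its second pass and returned 0.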
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:32.907 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4035628' 00:29:32.907 killing process with pid 4035628 00:29:32.907 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 4035628 00:29:32.907 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 4035628 00:29:32.907 Received shutdown signal, test time was about 0.908576 seconds 00:29:32.907 00:29:32.907 Latency(us) 00:29:32.907 [2024-12-13T09:31:26.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.907 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.907 Verification LBA range: start 0x0 length 0x400 00:29:32.907 Nvme1n1 : 0.90 284.96 17.81 0.00 0.00 221407.33 29210.33 230686.72 00:29:32.907 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.907 Verification LBA range: start 0x0 length 0x400 00:29:32.907 Nvme2n1 : 0.90 289.76 18.11 0.00 0.00 212082.55 8987.79 210713.84 00:29:32.907 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.907 Verification LBA range: start 0x0 length 0x400 00:29:32.907 Nvme3n1 : 0.90 283.97 17.75 0.00 0.00 213728.55 18100.42 233682.65 00:29:32.907 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.907 Verification LBA range: start 0x0 length 0x400 00:29:32.907 Nvme4n1 : 0.91 281.97 17.62 0.00 0.00 211701.39 14168.26 246665.02 00:29:32.907 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.907 Verification LBA range: start 0x0 length 0x400 00:29:32.907 Nvme5n1 : 0.88 216.97 13.56 0.00 0.00 269220.98 20222.54 259647.39 00:29:32.907 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.907 Verification LBA range: start 0x0 length 0x400 00:29:32.907 Nvme6n1 : 0.88 222.25 13.89 0.00 0.00 256325.93 3089.55 242670.45 00:29:32.907 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.907 Verification LBA range: start 0x0 length 0x400 00:29:32.907 Nvme7n1 : 0.86 223.53 13.97 0.00 0.00 249189.59 19848.05 240673.16 00:29:32.907 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.907 Verification LBA range: start 0x0 length 0x400 00:29:32.907 Nvme8n1 : 0.87 221.84 13.87 0.00 0.00 244965.83 16727.28 236678.58 00:29:32.907 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.907 Verification LBA range: start 0x0 length 0x400 00:29:32.907 Nvme9n1 : 0.87 219.46 13.72 0.00 0.00 243311.18 20347.37 245666.38 00:29:32.907 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:32.907 Verification LBA range: start 0x0 length 0x400 00:29:32.907 Nvme10n1 : 0.89 214.54 13.41 0.00 0.00 244421.00 20097.71 269633.83 00:29:32.907 [2024-12-13T09:31:26.798Z] =================================================================================================================== 00:29:32.907 [2024-12-13T09:31:26.798Z] Total : 2459.27 153.70 0.00 0.00 234048.79 3089.55 269633.83 00:29:34.280 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:35.213 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
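Note: killprocess, whose checks appear in the trace above before the bdevperf shutdown and latency summary, is essentially a guarded kill/wait. A simplified Linux-only sketch (the uname and sudo comparisons in the trace suggest the real autotest_common.sh helper also handles non-Linux systems and sudo-owned processes, which is omitted here):

killprocess_sketch() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                    # is the process still alive?
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1        # never signal the sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                   # reap it; only valid for children of this shell
}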
target/shutdown.sh@115 -- # kill -0 4035141 00:29:35.213 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:35.213 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:35.213 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:35.213 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:35.213 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:35.213 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:35.213 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:35.214 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:35.214 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:35.214 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:35.214 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:35.214 rmmod nvme_tcp 00:29:35.214 rmmod nvme_fabrics 00:29:35.214 rmmod nvme_keyring 00:29:35.214 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:35.214 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:35.214 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:35.214 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 4035141 ']' 00:29:35.214 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 4035141 00:29:35.214 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 4035141 ']' 00:29:35.214 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 4035141 00:29:35.214 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:35.214 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:35.214 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4035141 00:29:35.214 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:35.214 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:35.214 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4035141' 00:29:35.214 killing process with pid 4035141 00:29:35.214 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@973 -- # kill 4035141 00:29:35.214 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 4035141 00:29:38.494 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:38.494 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:38.494 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:38.494 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:38.494 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:29:38.494 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:38.494 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:29:38.494 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:38.494 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:38.494 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.494 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.494 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.395 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:40.395 00:29:40.395 real 0m12.810s 00:29:40.395 user 0m43.188s 00:29:40.395 sys 0m1.729s 00:29:40.395 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:40.395 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:40.395 ************************************ 00:29:40.395 END TEST nvmf_shutdown_tc2 00:29:40.395 ************************************ 00:29:40.395 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:40.395 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:40.395 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:40.395 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:40.395 ************************************ 00:29:40.395 START TEST nvmf_shutdown_tc3 00:29:40.395 ************************************ 00:29:40.395 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:29:40.395 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:40.395 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:40.395 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:40.395 10:31:34 
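Note: the block above (before tc3 starts) is the tc2 teardown, stoptarget followed by nvmftestfini: scratch files are removed, the initiator-side nvme-tcp/nvme-fabrics modules are unloaded, the nvmf_tgt pid is killed, only the iptables rules tagged with the SPDK_NVMF comment are stripped, and the test addresses are flushed. A condensed sketch of that sequence; the namespace-removal line is an assumption, since _remove_spdk_ns itself is not expanded in this trace:

nvmftestfini_sketch() {
    local nvmfpid=$1
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp
        modprobe -v -r nvme-fabrics && break
    done
    set -e
    [ -n "$nvmfpid" ] && killprocess_sketch "$nvmfpid"
    # drop only the rules the harness tagged, keep everything else intact
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
}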
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:40.395 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:40.395 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:40.395 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:40.395 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.395 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:40.395 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.395 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:40.395 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:40.395 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:40.395 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:40.395 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:40.396 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:40.396 10:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:40.396 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:40.396 Found net devices under 0000:af:00.0: cvl_0_0 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.396 10:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:40.396 Found net devices under 0000:af:00.1: cvl_0_1 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:40.396 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:40.655 10:31:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:40.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:40.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:29:40.655 00:29:40.655 --- 10.0.0.2 ping statistics --- 00:29:40.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.655 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:40.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:40.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:29:40.655 00:29:40.655 --- 10.0.0.1 ping statistics --- 00:29:40.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.655 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=4037324 00:29:40.655 10:31:34 
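Note: nvmftestinit for tc3, traced above, turns the two discovered E810 ports into a point-to-point test link: cvl_0_0 becomes the target interface inside the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1/24, a rule tagged SPDK_NVMF opens TCP port 4420 so it can be removed again later, and one ping in each direction confirms connectivity. Condensed from the commands visible in the trace:

# device discovery: list the net devices sitting under each supported PCI function
for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done

# wiring: target port into a namespace, initiator port left in the root namespace
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP port and tag the rule for later cleanup
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# sanity checks in both directions
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1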
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 4037324 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 4037324 ']' 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:40.655 10:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:40.913 [2024-12-13 10:31:34.629479] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:29:40.913 [2024-12-13 10:31:34.629592] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:40.913 [2024-12-13 10:31:34.749395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:41.171 [2024-12-13 10:31:34.860639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:41.171 [2024-12-13 10:31:34.860685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:41.171 [2024-12-13 10:31:34.860695] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:41.171 [2024-12-13 10:31:34.860705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:41.171 [2024-12-13 10:31:34.860713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:41.171 [2024-12-13 10:31:34.863084] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:41.171 [2024-12-13 10:31:34.863163] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:41.171 [2024-12-13 10:31:34.863241] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.171 [2024-12-13 10:31:34.863263] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:41.737 [2024-12-13 10:31:35.465522] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.737 10:31:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:41.737 Malloc1 00:29:41.737 [2024-12-13 10:31:35.621493] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:41.995 Malloc2 00:29:41.995 Malloc3 00:29:41.995 Malloc4 00:29:42.253 Malloc5 00:29:42.253 Malloc6 00:29:42.510 Malloc7 00:29:42.510 Malloc8 00:29:42.510 Malloc9 00:29:42.769 Malloc10 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=4037810 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 4037810 /var/tmp/bdevperf.sock 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 4037810 ']' 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:42.769 10:31:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:42.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:42.769 { 00:29:42.769 "params": { 00:29:42.769 "name": "Nvme$subsystem", 00:29:42.769 "trtype": "$TEST_TRANSPORT", 00:29:42.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.769 "adrfam": "ipv4", 00:29:42.769 "trsvcid": "$NVMF_PORT", 00:29:42.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.769 "hdgst": ${hdgst:-false}, 00:29:42.769 "ddgst": ${ddgst:-false} 00:29:42.769 }, 00:29:42.769 "method": "bdev_nvme_attach_controller" 00:29:42.769 } 00:29:42.769 EOF 00:29:42.769 )") 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:42.769 { 00:29:42.769 "params": { 00:29:42.769 "name": "Nvme$subsystem", 00:29:42.769 "trtype": "$TEST_TRANSPORT", 00:29:42.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.769 "adrfam": "ipv4", 00:29:42.769 "trsvcid": "$NVMF_PORT", 00:29:42.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.769 "hdgst": ${hdgst:-false}, 00:29:42.769 "ddgst": ${ddgst:-false} 00:29:42.769 }, 00:29:42.769 "method": "bdev_nvme_attach_controller" 00:29:42.769 } 00:29:42.769 EOF 00:29:42.769 )") 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:42.769 { 00:29:42.769 "params": { 00:29:42.769 
"name": "Nvme$subsystem", 00:29:42.769 "trtype": "$TEST_TRANSPORT", 00:29:42.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.769 "adrfam": "ipv4", 00:29:42.769 "trsvcid": "$NVMF_PORT", 00:29:42.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.769 "hdgst": ${hdgst:-false}, 00:29:42.769 "ddgst": ${ddgst:-false} 00:29:42.769 }, 00:29:42.769 "method": "bdev_nvme_attach_controller" 00:29:42.769 } 00:29:42.769 EOF 00:29:42.769 )") 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:42.769 { 00:29:42.769 "params": { 00:29:42.769 "name": "Nvme$subsystem", 00:29:42.769 "trtype": "$TEST_TRANSPORT", 00:29:42.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.769 "adrfam": "ipv4", 00:29:42.769 "trsvcid": "$NVMF_PORT", 00:29:42.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.769 "hdgst": ${hdgst:-false}, 00:29:42.769 "ddgst": ${ddgst:-false} 00:29:42.769 }, 00:29:42.769 "method": "bdev_nvme_attach_controller" 00:29:42.769 } 00:29:42.769 EOF 00:29:42.769 )") 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:42.769 { 00:29:42.769 "params": { 00:29:42.769 "name": "Nvme$subsystem", 00:29:42.769 "trtype": "$TEST_TRANSPORT", 00:29:42.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.769 "adrfam": "ipv4", 00:29:42.769 "trsvcid": "$NVMF_PORT", 00:29:42.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.769 "hdgst": ${hdgst:-false}, 00:29:42.769 "ddgst": ${ddgst:-false} 00:29:42.769 }, 00:29:42.769 "method": "bdev_nvme_attach_controller" 00:29:42.769 } 00:29:42.769 EOF 00:29:42.769 )") 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:42.769 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:42.770 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:42.770 { 00:29:42.770 "params": { 00:29:42.770 "name": "Nvme$subsystem", 00:29:42.770 "trtype": "$TEST_TRANSPORT", 00:29:42.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.770 "adrfam": "ipv4", 00:29:42.770 "trsvcid": "$NVMF_PORT", 00:29:42.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.770 "hdgst": ${hdgst:-false}, 00:29:42.770 "ddgst": ${ddgst:-false} 00:29:42.770 }, 00:29:42.770 "method": "bdev_nvme_attach_controller" 00:29:42.770 } 00:29:42.770 EOF 00:29:42.770 )") 00:29:42.770 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:42.770 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:29:42.770 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:42.770 { 00:29:42.770 "params": { 00:29:42.770 "name": "Nvme$subsystem", 00:29:42.770 "trtype": "$TEST_TRANSPORT", 00:29:42.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.770 "adrfam": "ipv4", 00:29:42.770 "trsvcid": "$NVMF_PORT", 00:29:42.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.770 "hdgst": ${hdgst:-false}, 00:29:42.770 "ddgst": ${ddgst:-false} 00:29:42.770 }, 00:29:42.770 "method": "bdev_nvme_attach_controller" 00:29:42.770 } 00:29:42.770 EOF 00:29:42.770 )") 00:29:42.770 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:42.770 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:42.770 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:42.770 { 00:29:42.770 "params": { 00:29:42.770 "name": "Nvme$subsystem", 00:29:42.770 "trtype": "$TEST_TRANSPORT", 00:29:42.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.770 "adrfam": "ipv4", 00:29:42.770 "trsvcid": "$NVMF_PORT", 00:29:42.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.770 "hdgst": ${hdgst:-false}, 00:29:42.770 "ddgst": ${ddgst:-false} 00:29:42.770 }, 00:29:42.770 "method": "bdev_nvme_attach_controller" 00:29:42.770 } 00:29:42.770 EOF 00:29:42.770 )") 00:29:42.770 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:42.770 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:42.770 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:42.770 { 00:29:42.770 "params": { 00:29:42.770 "name": "Nvme$subsystem", 00:29:42.770 "trtype": "$TEST_TRANSPORT", 00:29:42.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.770 "adrfam": "ipv4", 00:29:42.770 "trsvcid": "$NVMF_PORT", 00:29:42.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.770 "hdgst": ${hdgst:-false}, 00:29:42.770 "ddgst": ${ddgst:-false} 00:29:42.770 }, 00:29:42.770 "method": "bdev_nvme_attach_controller" 00:29:42.770 } 00:29:42.770 EOF 00:29:42.770 )") 00:29:42.770 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:42.770 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:42.770 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:42.770 { 00:29:42.770 "params": { 00:29:42.770 "name": "Nvme$subsystem", 00:29:42.770 "trtype": "$TEST_TRANSPORT", 00:29:42.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.770 "adrfam": "ipv4", 00:29:42.770 "trsvcid": "$NVMF_PORT", 00:29:42.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.770 "hdgst": ${hdgst:-false}, 00:29:42.770 "ddgst": ${ddgst:-false} 00:29:42.770 }, 00:29:42.770 "method": "bdev_nvme_attach_controller" 00:29:42.770 } 00:29:42.770 EOF 00:29:42.770 )") 00:29:42.770 10:31:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:42.770 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:29:42.770 [2024-12-13 10:31:36.630120] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:29:42.770 [2024-12-13 10:31:36.630227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4037810 ] 00:29:42.770 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:42.770 10:31:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:42.770 "params": { 00:29:42.770 "name": "Nvme1", 00:29:42.770 "trtype": "tcp", 00:29:42.770 "traddr": "10.0.0.2", 00:29:42.770 "adrfam": "ipv4", 00:29:42.770 "trsvcid": "4420", 00:29:42.770 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:42.770 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:42.770 "hdgst": false, 00:29:42.770 "ddgst": false 00:29:42.770 }, 00:29:42.770 "method": "bdev_nvme_attach_controller" 00:29:42.770 },{ 00:29:42.770 "params": { 00:29:42.770 "name": "Nvme2", 00:29:42.770 "trtype": "tcp", 00:29:42.770 "traddr": "10.0.0.2", 00:29:42.770 "adrfam": "ipv4", 00:29:42.770 "trsvcid": "4420", 00:29:42.770 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:42.770 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:42.770 "hdgst": false, 00:29:42.770 "ddgst": false 00:29:42.770 }, 00:29:42.770 "method": "bdev_nvme_attach_controller" 00:29:42.770 },{ 00:29:42.770 "params": { 00:29:42.770 "name": "Nvme3", 00:29:42.770 "trtype": "tcp", 00:29:42.770 "traddr": "10.0.0.2", 00:29:42.770 "adrfam": "ipv4", 00:29:42.770 "trsvcid": "4420", 00:29:42.770 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:42.770 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:42.770 "hdgst": false, 00:29:42.770 "ddgst": false 00:29:42.770 }, 00:29:42.770 "method": "bdev_nvme_attach_controller" 00:29:42.770 },{ 00:29:42.770 "params": { 00:29:42.770 "name": "Nvme4", 00:29:42.770 "trtype": "tcp", 00:29:42.770 "traddr": "10.0.0.2", 00:29:42.770 "adrfam": "ipv4", 00:29:42.770 "trsvcid": "4420", 00:29:42.770 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:42.770 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:42.770 "hdgst": false, 00:29:42.770 "ddgst": false 00:29:42.770 }, 00:29:42.770 "method": "bdev_nvme_attach_controller" 00:29:42.770 },{ 00:29:42.770 "params": { 00:29:42.770 "name": "Nvme5", 00:29:42.770 "trtype": "tcp", 00:29:42.770 "traddr": "10.0.0.2", 00:29:42.770 "adrfam": "ipv4", 00:29:42.770 "trsvcid": "4420", 00:29:42.770 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:42.770 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:42.770 "hdgst": false, 00:29:42.770 "ddgst": false 00:29:42.770 }, 00:29:42.770 "method": "bdev_nvme_attach_controller" 00:29:42.770 },{ 00:29:42.770 "params": { 00:29:42.770 "name": "Nvme6", 00:29:42.770 "trtype": "tcp", 00:29:42.770 "traddr": "10.0.0.2", 00:29:42.770 "adrfam": "ipv4", 00:29:42.770 "trsvcid": "4420", 00:29:42.770 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:42.770 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:42.770 "hdgst": false, 00:29:42.770 "ddgst": false 00:29:42.770 }, 00:29:42.770 "method": "bdev_nvme_attach_controller" 00:29:42.770 },{ 00:29:42.770 "params": { 00:29:42.770 "name": "Nvme7", 00:29:42.770 "trtype": "tcp", 00:29:42.770 
"traddr": "10.0.0.2", 00:29:42.770 "adrfam": "ipv4", 00:29:42.770 "trsvcid": "4420", 00:29:42.770 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:42.770 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:42.770 "hdgst": false, 00:29:42.770 "ddgst": false 00:29:42.770 }, 00:29:42.770 "method": "bdev_nvme_attach_controller" 00:29:42.770 },{ 00:29:42.770 "params": { 00:29:42.770 "name": "Nvme8", 00:29:42.770 "trtype": "tcp", 00:29:42.770 "traddr": "10.0.0.2", 00:29:42.770 "adrfam": "ipv4", 00:29:42.770 "trsvcid": "4420", 00:29:42.770 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:42.770 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:42.770 "hdgst": false, 00:29:42.770 "ddgst": false 00:29:42.770 }, 00:29:42.770 "method": "bdev_nvme_attach_controller" 00:29:42.770 },{ 00:29:42.770 "params": { 00:29:42.770 "name": "Nvme9", 00:29:42.770 "trtype": "tcp", 00:29:42.770 "traddr": "10.0.0.2", 00:29:42.770 "adrfam": "ipv4", 00:29:42.770 "trsvcid": "4420", 00:29:42.770 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:42.770 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:42.770 "hdgst": false, 00:29:42.770 "ddgst": false 00:29:42.770 }, 00:29:42.770 "method": "bdev_nvme_attach_controller" 00:29:42.770 },{ 00:29:42.770 "params": { 00:29:42.770 "name": "Nvme10", 00:29:42.770 "trtype": "tcp", 00:29:42.770 "traddr": "10.0.0.2", 00:29:42.770 "adrfam": "ipv4", 00:29:42.770 "trsvcid": "4420", 00:29:42.770 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:42.770 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:42.770 "hdgst": false, 00:29:42.770 "ddgst": false 00:29:42.770 }, 00:29:42.770 "method": "bdev_nvme_attach_controller" 00:29:42.770 }' 00:29:43.028 [2024-12-13 10:31:36.745835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.028 [2024-12-13 10:31:36.852779] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.928 Running I/O for 10 seconds... 
00:29:45.495 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:45.495 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:45.495 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:45.495 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.495 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:45.495 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.495 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:45.495 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:45.495 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:45.495 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:45.495 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:45.495 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:45.495 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:45.495 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:45.495 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:45.495 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:45.495 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.495 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:45.495 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.495 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:45.495 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:45.495 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=135 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 135 -ge 100 ']' 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 4037324 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 4037324 ']' 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 4037324 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4037324 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4037324' 00:29:45.757 killing process with pid 4037324 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 4037324 00:29:45.757 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 4037324 00:29:45.757 [2024-12-13 10:31:39.625730] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:45.757 [2024-12-13 10:31:39.625790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:45.757 [2024-12-13 10:31:39.625801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:45.757 [2024-12-13 10:31:39.631493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.757 [2024-12-13 10:31:39.631529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:45.757 [2024-12-13 10:31:39.631540] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set
(previous message repeated for tqpair=0x618000009880 through [2024-12-13 10:31:39.632086])
[2024-12-13 10:31:39.634631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set
(previous message repeated for tqpair=0x618000007880 through [2024-12-13 10:31:39.635184])
[2024-12-13 10:31:39.638573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
(previous message repeated for tqpair=0x618000007c80 through [2024-12-13 10:31:39.639124])
[2024-12-13 10:31:39.641153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-13 10:31:39.641193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair is logged for cid:1, cid:2 and cid:3 on each of the qpairs 0x61500032b480, 0x615000326480, 0x615000326e80, 0x615000327880 and 0x615000325a80)
[2024-12-13 10:31:39.641284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032b480 is same with the state(6) to be set
(the same nvme_tcp.c:326 message is logged for tqpair=0x615000326480, 0x615000326e80, 0x615000327880 and 0x615000325a80)
[2024-12-13 10:31:39.641652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set
(previous message repeated for tqpair=0x618000008080 through [2024-12-13 10:31:39.642044])
00:29:45.760 [2024-12-13 10:31:39.642052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:45.760 [2024-12-13 10:31:39.642061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:45.760 [2024-12-13 10:31:39.642069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:45.760 [2024-12-13 10:31:39.642078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:45.760 [2024-12-13 10:31:39.642086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:45.760 [2024-12-13 10:31:39.642094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:45.760 [2024-12-13 10:31:39.642112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:45.760 [2024-12-13 10:31:39.642123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:45.760 [2024-12-13 10:31:39.642132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:45.760 [2024-12-13 10:31:39.642140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:45.760 [2024-12-13 10:31:39.642148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:45.760 [2024-12-13 10:31:39.642156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:45.760 [2024-12-13 10:31:39.642164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:45.761 [2024-12-13 10:31:39.642173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:45.761 [2024-12-13 10:31:39.642181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:45.761 [2024-12-13 10:31:39.642189] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:45.761 [2024-12-13 10:31:39.642197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:45.761 [2024-12-13 10:31:39.642206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:45.761 [2024-12-13 10:31:39.642214] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:45.761 [2024-12-13 10:31:39.642222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:45.761 [2024-12-13 10:31:39.642230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 
00:29:45.761 [2024-12-13 10:31:39.642239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:45.761 [2024-12-13 10:31:39.642247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:45.761 [2024-12-13 10:31:39.642569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.642598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.642622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.642634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.642647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.642657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.642669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.642679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.642690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.642703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.642715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.642724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.642736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.642746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.642758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.642768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.642780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.642790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 
10:31:39.642801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.642810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.642825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.642835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.642847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.642856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.642867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.642877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.642888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.642898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.642909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.642918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.642930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.642939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.642950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.642959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.642971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.642982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.642993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.643002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 
10:31:39.643015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.643025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.643037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.643047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.643058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.643067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.643078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.643089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.643100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.643109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.643120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.643129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.643141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.643151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.643163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.643172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.643183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.643193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.761 [2024-12-13 10:31:39.643204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.761 [2024-12-13 10:31:39.643213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 
10:31:39.643224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 
10:31:39.643516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643696] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643716] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643902] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.762 [2024-12-13 10:31:39.643922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.762 [2024-12-13 10:31:39.643931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.762 [2024-12-13 10:31:39.643947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.763 [2024-12-13 10:31:39.643951] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.643958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.763 [2024-12-13 10:31:39.643960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.643969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.643972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.763 [2024-12-13 10:31:39.643981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.643984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.763 [2024-12-13 10:31:39.643991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.643999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.763 [2024-12-13 10:31:39.644001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.644010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.644010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.763 [2024-12-13 10:31:39.644020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.644025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.763 [2024-12-13 10:31:39.644031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.644036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.763 [2024-12-13 10:31:39.644040] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.644049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.644050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.763 [2024-12-13 10:31:39.644057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.644072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.763 [2024-12-13 10:31:39.644075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.644085] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.644089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.763 [2024-12-13 10:31:39.644093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.644100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.763 [2024-12-13 10:31:39.644102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.644111] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.644112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.763 [2024-12-13 10:31:39.644119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.644123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.763 [2024-12-13 10:31:39.644129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.644136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:45.763 [2024-12-13 10:31:39.644138] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.644147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.763 [2024-12-13 10:31:39.644147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.644159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.644167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.644175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.644184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.644192] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.644200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:45.763 [2024-12-13 10:31:39.644210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:46.032 [2024-12-13 10:31:39.647666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.032 [2024-12-13 10:31:39.647700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.032 [2024-12-13 10:31:39.647729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.647744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.647757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.647766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.647779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.647789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.647802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.647811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.647823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.647833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.647847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.647857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.647869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.647880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.647892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 
nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.647902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.647914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.647924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.647935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.647946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.647957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.647967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.647978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.647991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 
lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.033 [2024-12-13 10:31:39.648587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.033 [2024-12-13 10:31:39.648596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.648608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.648618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.648629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.648640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.648651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.648660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.648671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.648683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.648701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.648710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.648722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.648732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.648744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.648754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.648766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.648776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.648787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.648797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.648809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.648818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.648829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.648841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.648851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.648861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.648873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.648883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.648894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.648903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.648914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.648924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.648935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.648944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.648955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.648965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.648978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.648988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.648999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.649009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.649019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.649030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.649041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.649051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.649062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.649072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.649083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.034 [2024-12-13 10:31:39.649092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.034 [2024-12-13 10:31:39.649250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649345] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649371] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649387] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649396] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649446] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649486] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649532] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.034 [2024-12-13 10:31:39.649584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:46.035 [2024-12-13 10:31:39.649652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649693] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:29:46.035 [2024-12-13 10:31:39.649701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649769] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649795] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.649803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.651056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:46.035 [2024-12-13 10:31:39.651127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328c80 (9): Bad file descriptor 00:29:46.035 [2024-12-13 10:31:39.652191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652217] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652227] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652244] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652254] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652272] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652326] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652342] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652435] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652520] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652528] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652547] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same [2024-12-13 10:31:39.652536] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:46.035 with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.035 [2024-12-13 10:31:39.652678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.652687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.652694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.652702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.652711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.652719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.652728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.652735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.652743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.652752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.652869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.036 [2024-12-13 10:31:39.652896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:29:46.036 [2024-12-13 10:31:39.652909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.652937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032b480 (9): Bad file descriptor 00:29:46.036 [2024-12-13 10:31:39.652990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:29:46.036 [2024-12-13 10:31:39.653005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.036 [2024-12-13 10:31:39.653028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.036 [2024-12-13 10:31:39.653049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.036 [2024-12-13 10:31:39.653068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000329680 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.653098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor 00:29:46.036 [2024-12-13 10:31:39.653119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326e80 (9): Bad file descriptor 00:29:46.036 [2024-12-13 10:31:39.653141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000327880 (9): Bad file descriptor 00:29:46.036 [2024-12-13 10:31:39.653175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.036 [2024-12-13 10:31:39.653188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.036 [2024-12-13 10:31:39.653209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.036 [2024-12-13 10:31:39.653232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.036 [2024-12-13 10:31:39.653252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328280 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.653298] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.036 [2024-12-13 10:31:39.653311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.036 [2024-12-13 10:31:39.653334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.036 [2024-12-13 10:31:39.653355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.036 [2024-12-13 10:31:39.653376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032a080 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.653706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.036 [2024-12-13 10:31:39.653730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.036 [2024-12-13 10:31:39.653762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.036 [2024-12-13 10:31:39.653784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.036 [2024-12-13 10:31:39.653808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.036 [2024-12-13 10:31:39.653832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.036 [2024-12-13 10:31:39.653854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.036 [2024-12-13 10:31:39.653878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.036 [2024-12-13 10:31:39.653899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.036 [2024-12-13 10:31:39.653920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.036 [2024-12-13 10:31:39.653948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.036 [2024-12-13 10:31:39.653970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.653983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.036 [2024-12-13 10:31:39.653992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.036 [2024-12-13 10:31:39.654003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d780 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.654051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.654075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.654086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.654094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.654103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.654111] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.654121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 
00:29:46.036 [2024-12-13 10:31:39.654130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.654138] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.654146] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.654155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.654164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.654173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.654181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.036 [2024-12-13 10:31:39.654189] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654214] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654264] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 
00:29:46.037 [2024-12-13 10:31:39.654318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654326] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654342] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654350] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654403] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:46.037 [2024-12-13 10:31:39.654408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same [2024-12-13 10:31:39.654471] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:46.037 with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654533] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654589] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.654605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.655382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.037 [2024-12-13 10:31:39.655410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000328c80 with addr=10.0.0.2, port=4420 00:29:46.037 [2024-12-13 10:31:39.655422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328c80 is same with the state(6) to be set 00:29:46.037 [2024-12-13 10:31:39.655436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:29:46.037 [2024-12-13 10:31:39.656270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.037 [2024-12-13 10:31:39.656289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:46.037 [2024-12-13 10:31:39.656313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.037 [2024-12-13 10:31:39.656328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.037 [2024-12-13 10:31:39.656341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.037 [2024-12-13 10:31:39.656351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.037 [2024-12-13 10:31:39.656363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.037 [2024-12-13 10:31:39.656373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.037 [2024-12-13 10:31:39.656385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.037 [2024-12-13 10:31:39.656395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.037 [2024-12-13 10:31:39.656407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.037 [2024-12-13 10:31:39.656417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.037 [2024-12-13 10:31:39.656429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.037 [2024-12-13 10:31:39.656439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.037 [2024-12-13 10:31:39.656461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.656483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.656505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.656556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:46.038 [2024-12-13 10:31:39.656579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.656601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.656623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.656647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.656669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.656691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.656712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.656734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.656756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.656778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 
10:31:39.656800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.656820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.656842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.656863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.656884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.656905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.656928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.656949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.656970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.656979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.656991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.657001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.657012] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.657022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.657033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.657043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.657054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.657063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.657075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.657084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.657095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.657105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.657116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.657125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.657137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.657147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.657158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.657169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.657183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.657192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.657202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.657213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.657223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.657232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.657243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.657254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.657266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.657276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.657287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.657298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.657309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.657318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.657333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.657342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.038 [2024-12-13 10:31:39.657353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.038 [2024-12-13 10:31:39.657363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.039 [2024-12-13 10:31:39.657374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.039 [2024-12-13 10:31:39.657384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.039 [2024-12-13 10:31:39.657395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.039 [2024-12-13 10:31:39.657405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.039 [2024-12-13 10:31:39.657417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.039 [2024-12-13 10:31:39.657427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.039 [2024-12-13 10:31:39.657440] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.039 [2024-12-13 10:31:39.657462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.039 [2024-12-13 10:31:39.657480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.039 [2024-12-13 10:31:39.657503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.039 [2024-12-13 10:31:39.657520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.039 [2024-12-13 10:31:39.657535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.039 [2024-12-13 10:31:39.657553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.039 [2024-12-13 10:31:39.657568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.039 [2024-12-13 10:31:39.657596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.039 [2024-12-13 10:31:39.657614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.039 [2024-12-13 10:31:39.657633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.039 [2024-12-13 10:31:39.657649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.039 [2024-12-13 10:31:39.657663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.039 [2024-12-13 10:31:39.657674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.039 [2024-12-13 10:31:39.657688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.039 [2024-12-13 10:31:39.657699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.039 [2024-12-13 10:31:39.657712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.039 [2024-12-13 10:31:39.657723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.039 [2024-12-13 10:31:39.657737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.039 [2024-12-13 10:31:39.657747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.039 [2024-12-13 10:31:39.657759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.039 [2024-12-13 10:31:39.657770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.039 [2024-12-13 10:31:39.657783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.039 [2024-12-13 10:31:39.657794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.039 [2024-12-13 10:31:39.657808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.039 [2024-12-13 10:31:39.657818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.039 [2024-12-13 10:31:39.657834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032df00 is same with the state(6) to be set 00:29:46.039 [2024-12-13 10:31:39.658483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:46.039 [2024-12-13 10:31:39.658538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328c80 (9): Bad file descriptor 00:29:46.039 [2024-12-13 10:31:39.658554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:46.039 [2024-12-13 10:31:39.658565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:46.039 [2024-12-13 10:31:39.658583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:46.039 [2024-12-13 10:31:39.658596] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:46.039 [2024-12-13 10:31:39.659664] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:46.039 [2024-12-13 10:31:39.659881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:46.039 [2024-12-13 10:31:39.659909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328280 (9): Bad file descriptor 00:29:46.039 [2024-12-13 10:31:39.660079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.039 [2024-12-13 10:31:39.660097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:29:46.039 [2024-12-13 10:31:39.660108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326480 is same with the state(6) to be set 00:29:46.039 [2024-12-13 10:31:39.660119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:46.039 [2024-12-13 10:31:39.660129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:46.039 [2024-12-13 10:31:39.660139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:29:46.039 [2024-12-13 10:31:39.660151] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:29:46.039 [2024-12-13 10:31:39.660748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor 00:29:46.039 [2024-12-13 10:31:39.661313] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:46.039 [2024-12-13 10:31:39.661625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.039 [2024-12-13 10:31:39.661655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000328280 with addr=10.0.0.2, port=4420 00:29:46.039 [2024-12-13 10:31:39.661667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328280 is same with the state(6) to be set 00:29:46.039 [2024-12-13 10:31:39.661679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:46.039 [2024-12-13 10:31:39.661689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:46.039 [2024-12-13 10:31:39.661700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:46.039 [2024-12-13 10:31:39.661711] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:46.039 [2024-12-13 10:31:39.661939] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:46.039 [2024-12-13 10:31:39.661976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328280 (9): Bad file descriptor 00:29:46.039 [2024-12-13 10:31:39.662047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:46.039 [2024-12-13 10:31:39.662077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:46.039 [2024-12-13 10:31:39.662091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:46.039 [2024-12-13 10:31:39.662101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:46.039 [2024-12-13 10:31:39.662110] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:29:46.039 [2024-12-13 10:31:39.662294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.039 [2024-12-13 10:31:39.662311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:29:46.039 [2024-12-13 10:31:39.662323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:29:46.039 [2024-12-13 10:31:39.662371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:29:46.039 [2024-12-13 10:31:39.662420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:46.039 [2024-12-13 10:31:39.662430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:46.039 [2024-12-13 10:31:39.662439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:46.039 [2024-12-13 10:31:39.662454] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:46.039 [2024-12-13 10:31:39.662629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000329680 (9): Bad file descriptor 00:29:46.039 [2024-12-13 10:31:39.662667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032a080 (9): Bad file descriptor 00:29:46.039 [2024-12-13 10:31:39.662717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.039 [2024-12-13 10:31:39.662732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.039 [2024-12-13 10:31:39.662744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.039 [2024-12-13 10:31:39.662756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.039 [2024-12-13 10:31:39.662767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.039 [2024-12-13 10:31:39.662778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.039 [2024-12-13 10:31:39.662789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.039 [2024-12-13 10:31:39.662799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.039 [2024-12-13 10:31:39.662809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032aa80 is same with the state(6) to be set 00:29:46.039 [2024-12-13 10:31:39.662940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.662955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.662973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.662985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.662998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663202] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.040 [2024-12-13 10:31:39.663824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.040 [2024-12-13 10:31:39.663839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.663850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.663863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.663873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.663885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.663895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.663907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.663917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.663928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.663938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.663950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.663960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.663971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.670669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.670694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.670705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.670725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.670736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.670750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.670762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.670776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.670789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.670802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:46.041 [2024-12-13 10:31:39.670811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.670824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.670838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.670853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.670864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.670877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.670887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.670899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.670910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.670925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.670937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.670949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.670960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.670972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.670983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.670996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.671007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.671019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.671030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.671043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 
10:31:39.671054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.671069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.671078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.671091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.671102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.671114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032da00 is same with the state(6) to be set 00:29:46.041 [2024-12-13 10:31:39.672493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.672522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.672542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.672554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.672567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.672580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.672595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.672605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.672618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.672630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.672644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.672656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.672669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.672680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.672693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.672704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.672717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.672729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.672742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.672754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.672766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.041 [2024-12-13 10:31:39.672777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.041 [2024-12-13 10:31:39.672789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.672800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.672813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.672824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.672837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.672848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.672860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.672871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.672885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.672896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.672908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.672919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.672932] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.672944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.672957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.672968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.672979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.672990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673412] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.042 [2024-12-13 10:31:39.673735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.042 [2024-12-13 10:31:39.673749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.673761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.673772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.673785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.673795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.673807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.673817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.673830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.673842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.673854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.673864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.673876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.673886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.673900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.673911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.673922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.673933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.673945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.673956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.673969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.673979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.673990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.674000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.674013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.674024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.674039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.674050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.674059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032dc80 is same with the state(6) to be set 00:29:46.043 [2024-12-13 10:31:39.675405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675730] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.675985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.675997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.676011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.676022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.676035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.676046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.043 [2024-12-13 10:31:39.676058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.043 [2024-12-13 10:31:39.676069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:46.044 [2024-12-13 10:31:39.676715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 
10:31:39.676941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.044 [2024-12-13 10:31:39.676952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.044 [2024-12-13 10:31:39.676962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032eb80 is same with the state(6) to be set 00:29:46.044 [2024-12-13 10:31:39.678246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:46.044 [2024-12-13 10:31:39.678272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:46.044 [2024-12-13 10:31:39.678289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:46.044 [2024-12-13 10:31:39.678424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032aa80 (9): Bad file descriptor 00:29:46.044 [2024-12-13 10:31:39.678740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.045 [2024-12-13 10:31:39.678763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326e80 with addr=10.0.0.2, port=4420 00:29:46.045 [2024-12-13 10:31:39.678776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326e80 is same with the state(6) to be set 00:29:46.045 [2024-12-13 10:31:39.678960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.045 [2024-12-13 10:31:39.678977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000327880 with addr=10.0.0.2, port=4420 00:29:46.045 [2024-12-13 10:31:39.678994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000327880 is same with the state(6) to be set 00:29:46.045 [2024-12-13 10:31:39.679087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.045 [2024-12-13 10:31:39.679104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032b480 with addr=10.0.0.2, port=4420 00:29:46.045 [2024-12-13 10:31:39.679116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032b480 is same with the state(6) to be set 00:29:46.045 [2024-12-13 10:31:39.679965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.679991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:46.045 [2024-12-13 10:31:39.680064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 
10:31:39.680321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680582] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.045 [2024-12-13 10:31:39.680896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.045 [2024-12-13 10:31:39.680908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.680921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.680932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.680944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.680955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.680969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.680981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.680999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681075] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.681583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.681595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e400 is same with the state(6) to be set 00:29:46.046 [2024-12-13 10:31:39.682936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.682957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.682975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.682986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.683003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.683016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.683029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.683041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.683054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.683066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.683080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.683092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.683106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.046 [2024-12-13 10:31:39.683117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.046 [2024-12-13 10:31:39.683130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.683974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.683989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.684000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.684013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.684023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.684036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.684047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.684061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.684071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.684084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.684095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.684107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.047 [2024-12-13 10:31:39.684119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.047 [2024-12-13 10:31:39.684132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.048 [2024-12-13 10:31:39.684142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.048 [2024-12-13 10:31:39.684155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.048 [2024-12-13 10:31:39.684167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.048 [2024-12-13 10:31:39.684180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.048 [2024-12-13 10:31:39.684191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.048 [2024-12-13 10:31:39.684203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.048 [2024-12-13 10:31:39.684214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.048 [2024-12-13 10:31:39.684227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.048 [2024-12-13 10:31:39.684239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.048 [2024-12-13 10:31:39.684253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.048 [2024-12-13 10:31:39.684264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.048 [2024-12-13 10:31:39.684276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.048 [2024-12-13 10:31:39.684288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.048 [2024-12-13 10:31:39.684302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.048 [2024-12-13 10:31:39.684313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.048 [2024-12-13 10:31:39.684324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.048 [2024-12-13 10:31:39.684335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.048 [2024-12-13 10:31:39.684347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e680 is same with the state(6) to be set 00:29:46.048 [2024-12-13 10:31:39.686029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:46.048 [2024-12-13 10:31:39.686060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:46.048 [2024-12-13 10:31:39.686075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:46.048 [2024-12-13 10:31:39.686089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:46.048 [2024-12-13 10:31:39.686106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:29:46.048 [2024-12-13 10:31:39.686123] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:29:46.048 [2024-12-13 10:31:39.686187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326e80 (9): Bad file descriptor 00:29:46.048 [2024-12-13 10:31:39.686205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000327880 (9): Bad file descriptor 00:29:46.048 [2024-12-13 10:31:39.686219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032b480 (9): Bad file descriptor 00:29:46.048 [2024-12-13 10:31:39.686770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.048 [2024-12-13 10:31:39.686793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000328c80 with addr=10.0.0.2, port=4420 00:29:46.048 [2024-12-13 10:31:39.686806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328c80 is same with the state(6) to be set 00:29:46.048 [2024-12-13 10:31:39.686958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.048 [2024-12-13 10:31:39.686974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:29:46.048 [2024-12-13 10:31:39.686986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326480 is same with the state(6) to be set 00:29:46.048 [2024-12-13 10:31:39.687140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.048 [2024-12-13 10:31:39.687157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000328280 with addr=10.0.0.2, port=4420 00:29:46.048 [2024-12-13 10:31:39.687168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328280 is same with the state(6) to be set 00:29:46.048 [2024-12-13 10:31:39.687372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.048 [2024-12-13 10:31:39.687391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:29:46.048 [2024-12-13 10:31:39.687402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:29:46.048 [2024-12-13 10:31:39.687630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.048 [2024-12-13 10:31:39.687646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000329680 with addr=10.0.0.2, port=4420 00:29:46.048 [2024-12-13 10:31:39.687656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000329680 is same with the state(6) to be set 00:29:46.048 [2024-12-13 10:31:39.687803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.048 [2024-12-13 10:31:39.687818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032a080 with addr=10.0.0.2, port=4420 00:29:46.048 [2024-12-13 10:31:39.687828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032a080 is same with the state(6) to be set 00:29:46.048 [2024-12-13 10:31:39.687838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:46.048 [2024-12-13 10:31:39.687848] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:46.048 [2024-12-13 10:31:39.687859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:46.048 [2024-12-13 10:31:39.687871] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:46.048 [2024-12-13 10:31:39.687883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:46.048 [2024-12-13 10:31:39.687893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:46.048 [2024-12-13 10:31:39.687903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:46.048 [2024-12-13 10:31:39.687912] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:46.048 [2024-12-13 10:31:39.687922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:46.048 [2024-12-13 10:31:39.687931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:46.048 [2024-12-13 10:31:39.687940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:46.048 [2024-12-13 10:31:39.687949] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:46.048 [2024-12-13 10:31:39.688764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328c80 (9): Bad file descriptor 00:29:46.048 [2024-12-13 10:31:39.688790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor 00:29:46.048 [2024-12-13 10:31:39.688805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328280 (9): Bad file descriptor 00:29:46.048 [2024-12-13 10:31:39.688818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:29:46.048 [2024-12-13 10:31:39.688831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000329680 (9): Bad file descriptor 00:29:46.048 [2024-12-13 10:31:39.688845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032a080 (9): Bad file descriptor 00:29:46.048 [2024-12-13 10:31:39.689096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:46.048 [2024-12-13 10:31:39.689112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:46.048 [2024-12-13 10:31:39.689125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:46.048 [2024-12-13 10:31:39.689135] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:29:46.048 [2024-12-13 10:31:39.689147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:46.048 [2024-12-13 10:31:39.689156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:46.048 [2024-12-13 10:31:39.689167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:46.048 [2024-12-13 10:31:39.689175] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:46.048 [2024-12-13 10:31:39.689185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:46.048 [2024-12-13 10:31:39.689193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:46.048 [2024-12-13 10:31:39.689203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:46.048 [2024-12-13 10:31:39.689211] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:46.048 [2024-12-13 10:31:39.689222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:46.048 [2024-12-13 10:31:39.689230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:46.048 [2024-12-13 10:31:39.689239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:46.048 [2024-12-13 10:31:39.689250] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:46.048 [2024-12-13 10:31:39.689261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:46.048 [2024-12-13 10:31:39.689269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:46.048 [2024-12-13 10:31:39.689279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:46.048 [2024-12-13 10:31:39.689289] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:29:46.049 [2024-12-13 10:31:39.689301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:29:46.049 [2024-12-13 10:31:39.689310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:46.049 [2024-12-13 10:31:39.689319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:46.049 [2024-12-13 10:31:39.689328] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:29:46.049 [2024-12-13 10:31:39.689404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.689438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.689472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.689499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.689523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.689549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.689572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.689596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.689619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.689643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 
10:31:39.689666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.689691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.689713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.689743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.689768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.689791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.689816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.689841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.689864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.689888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.689910] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.689935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.689958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.689980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.689991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.690004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.690015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.690029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.690040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.690053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.690065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.690077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.690088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.690103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.690114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.690127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.690138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.690151] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.690162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.690174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.690186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.690198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.690209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.690223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.690236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.690248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.690259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.690271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.690282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.690294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.690306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.690318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.049 [2024-12-13 10:31:39.690328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.049 [2024-12-13 10:31:39.690340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.050 [2024-12-13 10:31:39.690352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.050 [2024-12-13 10:31:39.690363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.050 [2024-12-13 10:31:39.690373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.050 [2024-12-13 10:31:39.690387] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.050 [2024-12-13 10:31:39.690398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.050 [2024-12-13 10:31:39.690409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.050 [2024-12-13 10:31:39.690421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.050 [2024-12-13 10:31:39.690433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.050 [2024-12-13 10:31:39.690443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.050 [2024-12-13 10:31:39.690459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.050 [2024-12-13 10:31:39.690472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.050 [2024-12-13 10:31:39.690483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.050 [2024-12-13 10:31:39.690494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.050 [2024-12-13 10:31:39.690507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.050 [2024-12-13 10:31:39.690516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.050 [2024-12-13 10:31:39.690528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.050 [2024-12-13 10:31:39.690540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.050 [2024-12-13 10:31:39.690551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.050 [2024-12-13 10:31:39.690561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.050 [2024-12-13 10:31:39.690573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.050 [2024-12-13 10:31:39.690584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.050 [2024-12-13 10:31:39.690597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.050 [2024-12-13 10:31:39.690606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.050 [2024-12-13 10:31:39.690621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.050 [2024-12-13 10:31:39.690631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.050 [2024-12-13 10:31:39.690643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.050 [2024-12-13 10:31:39.690654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.050 [2024-12-13 10:31:39.690667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.050 [2024-12-13 10:31:39.690677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.050 [2024-12-13 10:31:39.690689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.050 [2024-12-13 10:31:39.690701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.050 [2024-12-13 10:31:39.690714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.050 [2024-12-13 10:31:39.690723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.050 [2024-12-13 10:31:39.690736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.050 [2024-12-13 10:31:39.690747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.050 [2024-12-13 10:31:39.690759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.050 [2024-12-13 10:31:39.690769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.050 [2024-12-13 10:31:39.690782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.050 [2024-12-13 10:31:39.690792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.050 [2024-12-13 10:31:39.690804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.050 [2024-12-13 10:31:39.690815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.050 [2024-12-13 10:31:39.690827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.050 [2024-12-13 10:31:39.690838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.050 [2024-12-13 10:31:39.690850] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.050 [2024-12-13 10:31:39.690861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:46.050 [2024-12-13 10:31:39.690873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.050 [2024-12-13 10:31:39.690882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:46.050 [2024-12-13 10:31:39.690893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.050 [2024-12-13 10:31:39.690903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:46.050 [2024-12-13 10:31:39.690914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:46.050 [2024-12-13 10:31:39.690924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:46.050 [2024-12-13 10:31:39.690935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e900 is same with the state(6) to be set
00:29:46.050 [2024-12-13 10:31:39.692207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:29:46.050 [2024-12-13 10:31:39.692229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:29:46.050 [2024-12-13 10:31:39.692242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:29:46.050 task offset: 25088 on job bdev=Nvme1n1 fails
00:29:46.050
00:29:46.050 Latency(us)
00:29:46.050 [2024-12-13T09:31:39.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:46.050 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:46.050 Job: Nvme1n1 ended in about 0.88 seconds with error
00:29:46.050 Verification LBA range: start 0x0 length 0x400
00:29:46.050 Nvme1n1 : 0.88 218.54 13.66 72.85 0.00 217285.73 23468.13 219701.64
00:29:46.050 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:46.050 Job: Nvme2n1 ended in about 0.89 seconds with error
00:29:46.050 Verification LBA range: start 0x0 length 0x400
00:29:46.050 Nvme2n1 : 0.89 214.12 13.38 13.52 0.00 271987.07 19348.72 243669.09
00:29:46.050 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:46.050 Job: Nvme3n1 ended in about 0.90 seconds with error
00:29:46.050 Verification LBA range: start 0x0 length 0x400
00:29:46.050 Nvme3n1 : 0.90 212.57 13.29 70.86 0.00 215017.57 16602.45 238675.87
00:29:46.050 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:46.050 Job: Nvme4n1 ended in about 0.91 seconds with error
00:29:46.050 Verification LBA range: start 0x0 length 0x400
00:29:46.050 Nvme4n1 : 0.91 211.89 13.24 70.63 0.00 211393.58 19348.72 250659.60
00:29:46.050 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:46.050 Job: Nvme5n1 ended in about 0.89 seconds with error
00:29:46.050 Verification LBA range: start 0x0 length 0x400
00:29:46.050 Nvme5n1 : 0.89 215.55 13.47 71.85 0.00 203377.13 17351.44 240673.16
00:29:46.050 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:46.050 Job: Nvme6n1 ended in about 0.88 seconds with error
00:29:46.050 Verification LBA range: start 0x0 length 0x400
00:29:46.050 Nvme6n1 : 0.88 217.68 13.61 72.56 0.00 196965.55 9362.29 242670.45
00:29:46.050 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:46.050 Job: Nvme7n1 ended in about 0.91 seconds with error
00:29:46.050 Verification LBA range: start 0x0 length 0x400
00:29:46.051 Nvme7n1 : 0.91 140.10 8.76 70.05 0.00 267637.19 15728.64 249660.95
00:29:46.051 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:46.051 Job: Nvme8n1 ended in about 0.92 seconds with error
00:29:46.051 Verification LBA range: start 0x0 length 0x400
00:29:46.051 Nvme8n1 : 0.92 148.41 9.28 61.11 0.00 261898.57 30708.30 231685.36
00:29:46.051 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:46.051 Job: Nvme9n1 ended in about 0.92 seconds with error
00:29:46.051 Verification LBA range: start 0x0 length 0x400
00:29:46.051 Nvme9n1 : 0.92 138.68 8.67 69.34 0.00 259526.46 20597.03 245666.38
00:29:46.051 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:46.051 Job: Nvme10n1 ended in about 0.91 seconds with error
00:29:46.051 Verification LBA range: start 0x0 length 0x400
00:29:46.051 Nvme10n1 : 0.91 140.81 8.80 70.40 0.00 249397.31 19348.72 265639.25
00:29:46.051 [2024-12-13T09:31:39.942Z] ===================================================================================================================
00:29:46.051 [2024-12-13T09:31:39.942Z] Total : 1858.34 116.15 643.17 0.00 231822.11 9362.29 265639.25
00:29:46.051 [2024-12-13 10:31:39.823413] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:46.051 [2024-12-13 10:31:39.823489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:29:46.051 [2024-12-13 10:31:39.823896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.051 [2024-12-13 10:31:39.823924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032b480 with addr=10.0.0.2, port=4420
00:29:46.051 [2024-12-13 10:31:39.823940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032b480 is same with the state(6) to be set
00:29:46.051 [2024-12-13 10:31:39.824187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.051 [2024-12-13 10:31:39.824208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000327880 with addr=10.0.0.2, port=4420
00:29:46.051 [2024-12-13 10:31:39.824219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000327880 is same with the state(6) to be set
00:29:46.051 [2024-12-13 10:31:39.824366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.051 [2024-12-13 10:31:39.824382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326e80 with addr=10.0.0.2, port=4420
00:29:46.051 [2024-12-13 10:31:39.824393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326e80 is same with the state(6) to be set
00:29:46.051 [2024-12-13 10:31:39.824588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.051 [2024-12-13 10:31:39.824605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032aa80 with addr=10.0.0.2, port=4420 00:29:46.051 [2024-12-13 10:31:39.824615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032aa80 is same with the state(6) to be set 00:29:46.051 [2024-12-13 10:31:39.825171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032b480 (9): Bad file descriptor 00:29:46.051 [2024-12-13 10:31:39.825200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000327880 (9): Bad file descriptor 00:29:46.051 [2024-12-13 10:31:39.825214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326e80 (9): Bad file descriptor 00:29:46.051 [2024-12-13 10:31:39.825226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032aa80 (9): Bad file descriptor 00:29:46.051 [2024-12-13 10:31:39.825357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:29:46.051 [2024-12-13 10:31:39.825380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:29:46.051 [2024-12-13 10:31:39.825393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:46.051 [2024-12-13 10:31:39.825412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:46.051 [2024-12-13 10:31:39.825424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:46.051 [2024-12-13 10:31:39.825436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:46.051 [2024-12-13 10:31:39.825498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:46.051 [2024-12-13 10:31:39.825511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:46.051 [2024-12-13 10:31:39.825524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:46.051 [2024-12-13 10:31:39.825536] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:46.051 [2024-12-13 10:31:39.825548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:46.051 [2024-12-13 10:31:39.825556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:46.051 [2024-12-13 10:31:39.825566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:46.051 [2024-12-13 10:31:39.825575] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:29:46.051 [2024-12-13 10:31:39.825585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:46.051 [2024-12-13 10:31:39.825593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:46.051 [2024-12-13 10:31:39.825602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:46.051 [2024-12-13 10:31:39.825615] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:46.051 [2024-12-13 10:31:39.825626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:46.051 [2024-12-13 10:31:39.825634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:46.051 [2024-12-13 10:31:39.825644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:29:46.051 [2024-12-13 10:31:39.825653] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:29:46.051 [2024-12-13 10:31:39.825917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.051 [2024-12-13 10:31:39.825935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032a080 with addr=10.0.0.2, port=4420 00:29:46.051 [2024-12-13 10:31:39.825947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032a080 is same with the state(6) to be set 00:29:46.051 [2024-12-13 10:31:39.826099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.051 [2024-12-13 10:31:39.826115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000329680 with addr=10.0.0.2, port=4420 00:29:46.051 [2024-12-13 10:31:39.826126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000329680 is same with the state(6) to be set 00:29:46.051 [2024-12-13 10:31:39.826297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.051 [2024-12-13 10:31:39.826313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:29:46.051 [2024-12-13 10:31:39.826324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:29:46.051 [2024-12-13 10:31:39.826546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.051 [2024-12-13 10:31:39.826563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000328280 with addr=10.0.0.2, port=4420 00:29:46.051 [2024-12-13 10:31:39.826573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328280 is same with the state(6) to be set 00:29:46.051 [2024-12-13 10:31:39.826668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.051 [2024-12-13 10:31:39.826684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:29:46.051 [2024-12-13 10:31:39.826694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326480 is same with the state(6) to be set 00:29:46.051 
[2024-12-13 10:31:39.826916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.051 [2024-12-13 10:31:39.826932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000328c80 with addr=10.0.0.2, port=4420 00:29:46.051 [2024-12-13 10:31:39.826942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328c80 is same with the state(6) to be set 00:29:46.051 [2024-12-13 10:31:39.826987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032a080 (9): Bad file descriptor 00:29:46.051 [2024-12-13 10:31:39.827003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000329680 (9): Bad file descriptor 00:29:46.051 [2024-12-13 10:31:39.827017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:29:46.051 [2024-12-13 10:31:39.827029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328280 (9): Bad file descriptor 00:29:46.051 [2024-12-13 10:31:39.827042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor 00:29:46.051 [2024-12-13 10:31:39.827058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328c80 (9): Bad file descriptor 00:29:46.051 [2024-12-13 10:31:39.827094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:29:46.051 [2024-12-13 10:31:39.827106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:46.051 [2024-12-13 10:31:39.827116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:46.051 [2024-12-13 10:31:39.827126] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:29:46.051 [2024-12-13 10:31:39.827137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:46.051 [2024-12-13 10:31:39.827146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:46.051 [2024-12-13 10:31:39.827155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:46.051 [2024-12-13 10:31:39.827164] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:29:46.051 [2024-12-13 10:31:39.827173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:46.051 [2024-12-13 10:31:39.827182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:46.051 [2024-12-13 10:31:39.827192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:46.051 [2024-12-13 10:31:39.827200] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:29:46.051 [2024-12-13 10:31:39.827210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:46.051 [2024-12-13 10:31:39.827218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:46.051 [2024-12-13 10:31:39.827227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:46.051 [2024-12-13 10:31:39.827236] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:46.051 [2024-12-13 10:31:39.827246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:46.052 [2024-12-13 10:31:39.827254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:46.052 [2024-12-13 10:31:39.827263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:46.052 [2024-12-13 10:31:39.827271] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:46.052 [2024-12-13 10:31:39.827280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:46.052 [2024-12-13 10:31:39.827289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:46.052 [2024-12-13 10:31:39.827297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:46.052 [2024-12-13 10:31:39.827307] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
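Editor's note: errno 111 in the posix_sock_create failures above is ECONNREFUSED. At this point in tc3 the target application has already been torn down, so every reconnect attempt from the initiator-side bdev_nvme layer is refused, controller reinitialization fails, and the reset for each cnode1..cnode10 is reported as failed, which is the expected failure path for this shutdown test. A small, hypothetical by-hand check (not part of shutdown.sh) that shows the same condition from the shell, reusing the namespace name and the 10.0.0.2:4420 listener address that appear in this log:

    # Hypothetical manual checks; cvl_0_0_ns_spdk and 10.0.0.2:4420 are taken
    # from this log, the commands themselves are not part of the test scripts.
    grep -w 111 /usr/include/asm-generic/errno.h    # -> #define ECONNREFUSED 111 /* Connection refused */
    ip netns exec cvl_0_0_ns_spdk ss -ltn | grep ':4420' \
        || echo "no NVMe/TCP listener left on port 4420"
    nc -z -w 1 10.0.0.2 4420 \
        || echo "connect refused, matching 'connect() failed, errno = 111' above"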
00:29:49.332 10:31:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:49.898 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 4037810 00:29:49.898 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:29:49.898 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 4037810 00:29:49.898 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:49.899 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:49.899 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:29:49.899 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:49.899 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 4037810 00:29:49.899 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:29:49.899 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:49.899 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:29:49.899 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:29:49.899 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:29:49.899 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:49.899 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:49.899 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:49.899 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:49.899 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:49.899 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:49.899 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:49.899 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:49.899 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:49.899 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:49.899 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:49.899 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:49.899 rmmod nvme_tcp 00:29:49.899 
rmmod nvme_fabrics 00:29:49.899 rmmod nvme_keyring 00:29:50.157 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:50.157 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:50.157 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:50.157 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 4037324 ']' 00:29:50.157 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 4037324 00:29:50.157 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 4037324 ']' 00:29:50.157 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 4037324 00:29:50.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4037324) - No such process 00:29:50.157 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 4037324 is not found' 00:29:50.157 Process with pid 4037324 is not found 00:29:50.157 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:50.157 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:50.157 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:50.157 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:50.157 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:29:50.157 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:50.157 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:29:50.157 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:50.157 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:50.157 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.157 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.157 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.059 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:52.059 00:29:52.059 real 0m11.689s 00:29:52.059 user 0m34.427s 00:29:52.059 sys 0m1.640s 00:29:52.059 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:52.059 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:52.059 ************************************ 00:29:52.059 END TEST nvmf_shutdown_tc3 00:29:52.059 ************************************ 00:29:52.059 10:31:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:52.059 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:52.059 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:52.059 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:52.059 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:52.059 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:52.318 ************************************ 00:29:52.318 START TEST nvmf_shutdown_tc4 00:29:52.318 ************************************ 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:52.318 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:52.318 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:52.318 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.319 10:31:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:52.319 Found net devices under 0000:af:00.0: cvl_0_0 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:52.319 Found net devices under 0000:af:00.1: cvl_0_1 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:52.319 10:31:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:52.319 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:52.319 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:52.319 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:52.319 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:52.319 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:52.319 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:52.319 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:52.319 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:52.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:52.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:29:52.593 00:29:52.593 --- 10.0.0.2 ping statistics --- 00:29:52.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.593 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:52.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:52.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:29:52.593 00:29:52.593 --- 10.0.0.1 ping statistics --- 00:29:52.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.593 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=4039450 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 4039450 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 4039450 ']' 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
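Editor's note: the namespace and target bring-up traced above by nvmftestinit/nvmfappstart condenses to the standalone sketch below. Interface names, addresses, the iptables rule, the ping checks and the nvmf_tgt flags are taken directly from this log; the iptables comment tag is omitted, and the repeated "ip netns exec cvl_0_0_ns_spdk" prefix shown in the trace is collapsed to a single one, which is sufficient. -m 0x1E pins the reactors to cores 1-4 (matching the "Reactor started on core 1..4" notices further down) and -e 0xFFFF enables all tracepoint groups.

    # Condensed sketch of the bring-up traced above (not a drop-in replacement
    # for nvmf/common.sh); SPDK_BIN is the build directory used in this log.
    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

    # Start the target inside the namespace: shared-memory id 0, tracepoint
    # group mask 0xFFFF, core mask 0x1E (cores 1-4).
    ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN"/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!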
00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:52.593 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:52.593 [2024-12-13 10:31:46.380583] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:29:52.593 [2024-12-13 10:31:46.380696] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.866 [2024-12-13 10:31:46.501757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:52.866 [2024-12-13 10:31:46.611407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:52.866 [2024-12-13 10:31:46.611459] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:52.866 [2024-12-13 10:31:46.611470] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.866 [2024-12-13 10:31:46.611480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.866 [2024-12-13 10:31:46.611488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:52.866 [2024-12-13 10:31:46.614000] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:52.866 [2024-12-13 10:31:46.614076] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:52.866 [2024-12-13 10:31:46.614154] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.866 [2024-12-13 10:31:46.614175] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:53.455 [2024-12-13 10:31:47.227553] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:53.455 10:31:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.455 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:53.713 Malloc1 
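Editor's note: the shutdown.sh@27-29 lines above build test/nvmf/target/rpcs.txt with one block per subsystem (cnode1..cnode10) and create the TCP transport; the Malloc2..Malloc10 bdevs and the 10.0.0.2:4420 listener notice follow in the next lines. The exact rpcs.txt contents are never echoed in this log, so the loop below is only an illustrative reconstruction using standard rpc.py commands: the malloc size and block size and the omitted serial number are placeholders, while the NQNs, bdev names, transport options, address and port come from the log.

    # Illustrative reconstruction of the per-subsystem setup (assumed shape;
    # the real rpcs.txt is not shown in this log).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192       # same call as rpc_cmd above
    for i in $(seq 1 10); do
        $RPC bdev_malloc_create 64 512 -b Malloc$i     # size/block size are placeholders
        $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done

The test then launches spdk_nvme_perf against these subsystems (the -q 128 -o 45056 -w randwrite invocation below) and kills the target mid-run; the long run of "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" records that follows is that in-flight I/O being aborted (SCT 0 / SC 0x08 is the generic "command aborted due to SQ deletion" status), not an unexpected failure of the test itself.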
00:29:53.713 [2024-12-13 10:31:47.392098] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.713 Malloc2 00:29:53.713 Malloc3 00:29:53.971 Malloc4 00:29:53.971 Malloc5 00:29:53.971 Malloc6 00:29:54.230 Malloc7 00:29:54.230 Malloc8 00:29:54.489 Malloc9 00:29:54.489 Malloc10 00:29:54.489 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.489 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:54.489 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:54.489 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:54.489 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=4039777 00:29:54.489 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:54.489 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:54.747 [2024-12-13 10:31:48.425785] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:00.023 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:00.023 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 4039450 00:30:00.023 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 4039450 ']' 00:30:00.023 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 4039450 00:30:00.023 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:30:00.023 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:00.023 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4039450 00:30:00.023 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:00.023 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:00.023 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4039450' 00:30:00.023 killing process with pid 4039450 00:30:00.023 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 4039450 00:30:00.023 10:31:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 4039450 00:30:00.023 Write completed with error (sct=0, 
sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 [2024-12-13 10:31:53.417195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 
00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 [2024-12-13 10:31:53.418882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 starting I/O failed: -6 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.023 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 
starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 [2024-12-13 10:31:53.420867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 [2024-12-13 10:31:53.420913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:00.024 [2024-12-13 10:31:53.420925] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 [2024-12-13 10:31:53.420935] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:00.024 starting I/O failed: -6 00:30:00.024 [2024-12-13 10:31:53.420944] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:00.024 [2024-12-13 10:31:53.420962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:00.024 [2024-12-13 10:31:53.420971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:00.024 [2024-12-13 10:31:53.420980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 [2024-12-13 10:31:53.420988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to 
be set 00:30:00.024 starting I/O failed: -6 00:30:00.024 [2024-12-13 10:31:53.420997] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:00.024 [2024-12-13 10:31:53.421005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 [2024-12-13 10:31:53.421326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 [2024-12-13 10:31:53.422699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:30:00.024 [2024-12-13 10:31:53.422732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:30:00.024 [2024-12-13 10:31:53.422743] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:30:00.024 [2024-12-13 10:31:53.422753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 [2024-12-13 10:31:53.422762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:30:00.024 [2024-12-13 10:31:53.422771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:30:00.024 [2024-12-13 10:31:53.422784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:30:00.024 [2024-12-13 10:31:53.422792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:30:00.024 [2024-12-13 10:31:53.422801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, 
sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.024 Write completed with error (sct=0, sc=8) 00:30:00.024 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 [2024-12-13 10:31:53.428404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.025 NVMe io qpair process completion error 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, 
sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 [2024-12-13 10:31:53.433400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.025 starting I/O failed: -6 00:30:00.025 starting I/O failed: -6 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 [2024-12-13 10:31:53.435324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.025 Write 
completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 
starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 [2024-12-13 10:31:53.437829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.025 starting I/O failed: -6 00:30:00.025 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: 
-6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 [2024-12-13 10:31:53.448253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.026 NVMe io qpair process completion error 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write 
completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 [2024-12-13 10:31:53.449907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write 
completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 starting I/O failed: -6 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.026 Write completed with error (sct=0, sc=8) 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 [2024-12-13 10:31:53.452000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, 
sc=8) 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 [2024-12-13 10:31:53.454442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write 
completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write 
completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.027 starting I/O failed: -6 00:30:00.027 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 [2024-12-13 10:31:53.469596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.028 NVMe io qpair process completion error 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 [2024-12-13 10:31:53.471247] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 [2024-12-13 10:31:53.472864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with 
error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 [2024-12-13 10:31:53.475527] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.028 Write completed with error (sct=0, sc=8) 00:30:00.028 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error 
(sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 [2024-12-13 10:31:53.489680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.029 NVMe io qpair process completion error 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 starting I/O failed: -6 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 Write completed with error (sct=0, sc=8) 00:30:00.029 Write completed with error (sct=0, sc=8) 
00:30:00.029 [repeated per-I/O entries "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" collapsed; distinct qpair error messages retained below]
00:30:00.029 [2024-12-13 10:31:53.491140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.030 [2024-12-13 10:31:53.492972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.030 [2024-12-13 10:31:53.495424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.030 [2024-12-13 10:31:53.509499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.030 NVMe io qpair process completion error
00:30:00.031 [2024-12-13 10:31:53.511025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.031 [2024-12-13 10:31:53.512677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.031 [2024-12-13 10:31:53.515201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.032 [2024-12-13 10:31:53.527066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.032 NVMe io qpair process completion error
00:30:00.032 [2024-12-13 10:31:53.528667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.032 [2024-12-13 10:31:53.530382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.033 [2024-12-13 10:31:53.532950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.033 [2024-12-13 10:31:53.547529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.033 NVMe io qpair process completion error
00:30:00.034 [2024-12-13 10:31:53.549047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.034 [2024-12-13 10:31:53.550951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.034 [2024-12-13 10:31:53.553599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:00.035 [2024-12-13 10:31:53.567482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.035 NVMe io qpair process completion error
00:30:00.035 [2024-12-13 10:31:53.569104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.035 [2024-12-13 10:31:53.571042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.036 [2024-12-13 10:31:53.573579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:00.036 Write completed with error (sct=0, sc=8)
00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 [2024-12-13 10:31:53.591516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.036 NVMe io qpair process completion error 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 
00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 [2024-12-13 10:31:53.592965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.036 starting I/O failed: -6 00:30:00.036 starting I/O failed: -6 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.036 starting I/O failed: -6 00:30:00.036 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write 
completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 [2024-12-13 10:31:53.594767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 
starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 [2024-12-13 10:31:53.597436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 
00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.037 Write completed with error (sct=0, sc=8) 00:30:00.037 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 
00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 Write completed with error (sct=0, sc=8) 00:30:00.038 starting I/O failed: -6 00:30:00.038 [2024-12-13 10:31:53.615594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:00.038 NVMe io qpair process completion error 00:30:00.038 Initializing NVMe Controllers 00:30:00.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:30:00.038 Controller IO queue size 128, less than required. 00:30:00.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:00.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:30:00.038 Controller IO queue size 128, less than required. 00:30:00.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:00.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:30:00.038 Controller IO queue size 128, less than required. 00:30:00.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:00.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:30:00.038 Controller IO queue size 128, less than required. 00:30:00.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:00.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:00.038 Controller IO queue size 128, less than required. 00:30:00.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:00.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:30:00.038 Controller IO queue size 128, less than required. 00:30:00.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:00.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:30:00.038 Controller IO queue size 128, less than required. 00:30:00.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:00.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:30:00.038 Controller IO queue size 128, less than required. 00:30:00.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:00.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:30:00.038 Controller IO queue size 128, less than required. 00:30:00.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:00.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:30:00.038 Controller IO queue size 128, less than required. 00:30:00.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:00.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:30:00.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:30:00.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:30:00.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:30:00.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:00.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:30:00.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:30:00.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:30:00.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:30:00.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:30:00.038 Initialization complete. Launching workers. 00:30:00.038 ======================================================== 00:30:00.038 Latency(us) 00:30:00.038 Device Information : IOPS MiB/s Average min max 00:30:00.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1855.51 79.73 68990.58 1463.27 169663.12 00:30:00.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1829.76 78.62 70089.95 1338.74 177856.60 00:30:00.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1821.67 78.27 70564.76 1376.82 197191.20 00:30:00.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1794.64 77.11 71797.26 1278.02 238555.40 00:30:00.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1809.11 77.74 68792.80 1444.50 142000.90 00:30:00.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1817.41 78.09 70978.23 1346.70 232008.59 00:30:00.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1787.19 76.79 69612.26 1529.99 131086.94 00:30:00.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1799.54 77.32 69249.50 1303.00 137691.68 00:30:00.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1824.86 78.41 68467.65 1222.14 155700.38 00:30:00.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1819.33 78.17 68833.69 1817.13 146575.67 00:30:00.038 ======================================================== 00:30:00.038 Total : 18159.02 780.27 69734.46 1222.14 238555.40 00:30:00.038 00:30:00.038 [2024-12-13 10:31:53.649037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e800 is same with the state(6) to be set 00:30:00.038 [2024-12-13 10:31:53.649100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fc00 is same with the state(6) to be set 00:30:00.038 [2024-12-13 10:31:53.649143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020b00 is same with the state(6) to be set 00:30:00.038 [2024-12-13 10:31:53.649185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ed00 is same with the state(6) to be set 00:30:00.038 [2024-12-13 10:31:53.649227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001de00 is same with the state(6) to be set 00:30:00.038 [2024-12-13 10:31:53.649268] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:30:00.038 [2024-12-13 10:31:53.649308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001f200 is same with the state(6) to be set 00:30:00.038 [2024-12-13 10:31:53.649359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e300 is same with the state(6) to be set 00:30:00.038 [2024-12-13 10:31:53.649399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001f700 is same with the state(6) to be set 00:30:00.038 [2024-12-13 10:31:53.649442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set 00:30:00.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:03.330 10:31:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 4039777 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 4039777 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 4039777 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:03.899 rmmod nvme_tcp 00:30:03.899 rmmod nvme_fabrics 00:30:03.899 rmmod nvme_keyring 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 4039450 ']' 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 4039450 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 4039450 ']' 00:30:03.899 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 4039450 00:30:03.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4039450) - No such process 00:30:03.900 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 4039450 is not found' 00:30:03.900 Process with pid 4039450 is not found 00:30:03.900 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:03.900 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:03.900 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:03.900 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:30:03.900 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:30:03.900 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:03.900 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:30:03.900 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:03.900 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:03.900 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.900 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:03.900 10:31:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.804 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:05.804 00:30:05.804 real 0m13.698s 00:30:05.804 user 0m39.614s 00:30:05.804 sys 0m4.972s 00:30:05.804 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:05.804 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:05.804 ************************************ 00:30:05.804 END TEST nvmf_shutdown_tc4 00:30:05.804 ************************************ 00:30:06.062 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:30:06.062 00:30:06.062 real 0m58.670s 00:30:06.062 user 2m51.335s 00:30:06.062 sys 0m14.540s 00:30:06.062 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:06.062 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:06.062 ************************************ 00:30:06.062 END TEST nvmf_shutdown 00:30:06.062 ************************************ 00:30:06.062 10:31:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:30:06.062 10:31:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:06.062 10:31:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:06.063 ************************************ 00:30:06.063 START TEST nvmf_nsid 00:30:06.063 ************************************ 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:30:06.063 * Looking for test storage... 
00:30:06.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:06.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.063 --rc genhtml_branch_coverage=1 00:30:06.063 --rc genhtml_function_coverage=1 00:30:06.063 --rc genhtml_legend=1 00:30:06.063 --rc geninfo_all_blocks=1 00:30:06.063 --rc geninfo_unexecuted_blocks=1 00:30:06.063 00:30:06.063 ' 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:06.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.063 --rc genhtml_branch_coverage=1 00:30:06.063 --rc genhtml_function_coverage=1 00:30:06.063 --rc genhtml_legend=1 00:30:06.063 --rc geninfo_all_blocks=1 00:30:06.063 --rc geninfo_unexecuted_blocks=1 00:30:06.063 00:30:06.063 ' 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:06.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.063 --rc genhtml_branch_coverage=1 00:30:06.063 --rc genhtml_function_coverage=1 00:30:06.063 --rc genhtml_legend=1 00:30:06.063 --rc geninfo_all_blocks=1 00:30:06.063 --rc geninfo_unexecuted_blocks=1 00:30:06.063 00:30:06.063 ' 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:06.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.063 --rc genhtml_branch_coverage=1 00:30:06.063 --rc genhtml_function_coverage=1 00:30:06.063 --rc genhtml_legend=1 00:30:06.063 --rc geninfo_all_blocks=1 00:30:06.063 --rc geninfo_unexecuted_blocks=1 00:30:06.063 00:30:06.063 ' 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.063 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:06.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:30:06.323 10:31:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:11.595 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:11.595 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
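The trace above is nvmf/common.sh's NIC discovery: it walks the PCI bus, matches the two Intel E810 ports (vendor 0x8086, device 0x159b, bound to the ice driver) and records them as candidate test interfaces. A rough standalone sketch of that kind of sysfs scan is shown below; the loop structure and variable names here are illustrative, not the actual gather_supported_nvmf_pci_devs implementation.

    #!/usr/bin/env bash
    # Sketch only: list PCI functions matching the Intel E810 ID seen in the trace
    # (0x8086:0x159b) and report which kernel driver, if any, has claimed them.
    intel=0x8086
    e810=0x159b
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == "$e810" ]] || continue
        if [[ -e $pci/driver ]]; then
            driver=$(basename "$(readlink -f "$pci/driver")")
        else
            driver=unbound
        fi
        echo "Found ${pci##*/} ($intel - $e810), driver: $driver"
    done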
00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:11.595 Found net devices under 0000:af:00.0: cvl_0_0 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:11.595 Found net devices under 0000:af:00.1: cvl_0_1 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:11.595 10:32:05 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:11.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:11.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:30:11.595 00:30:11.595 --- 10.0.0.2 ping statistics --- 00:30:11.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.595 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:11.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:11.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:30:11.595 00:30:11.595 --- 10.0.0.1 ping statistics --- 00:30:11.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.595 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:30:11.595 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:11.596 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:11.596 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:11.596 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:11.596 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:11.596 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:11.596 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:11.596 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:30:11.596 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:11.596 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:11.596 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:11.596 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=4044588 00:30:11.596 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 4044588 00:30:11.596 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:30:11.596 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 4044588 ']' 00:30:11.596 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.596 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:11.596 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.596 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:11.596 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:11.854 [2024-12-13 10:32:05.551616] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
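The block above is nvmf_tcp_init: the test keeps one E810 port (cvl_0_1) in the root namespace as the initiator side and moves the other (cvl_0_0) into a dedicated namespace that plays the target, so NVMe/TCP traffic crosses real hardware on a single host. Condensed into plain commands, the setup traced above amounts to the following; interface names, addresses and the port 4420 firewall rule are taken directly from the trace, and this is a restatement for readability rather than the nvmf/common.sh source.

    # Target side lives in its own network namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Initiator keeps 10.0.0.1, target gets 10.0.0.2 inside the namespace.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP (port 4420) in on the initiator-facing interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity checks in both directions, as in the ping output above.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application started right after this is wrapped in "ip netns exec cvl_0_0_ns_spdk", which is why its TCP listener ends up on the target-side address.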
00:30:11.854 [2024-12-13 10:32:05.551704] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:11.854 [2024-12-13 10:32:05.669620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.113 [2024-12-13 10:32:05.773519] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:12.113 [2024-12-13 10:32:05.773562] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:12.113 [2024-12-13 10:32:05.773572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:12.113 [2024-12-13 10:32:05.773582] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:12.113 [2024-12-13 10:32:05.773589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:12.113 [2024-12-13 10:32:05.774852] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=4044821 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=fb0ad8bc-787a-4143-82b4-9edf50500844 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=7eb8bb48-fcd2-4428-8b51-9a07957ede43 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=44116a09-1017-4429-b173-a5e701bad70d 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:12.681 null0 00:30:12.681 null1 00:30:12.681 null2 00:30:12.681 [2024-12-13 10:32:06.446026] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:12.681 [2024-12-13 10:32:06.470257] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:12.681 [2024-12-13 10:32:06.472363] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:30:12.681 [2024-12-13 10:32:06.472440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4044821 ] 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 4044821 /var/tmp/tgt2.sock 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 4044821 ']' 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:30:12.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
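At this point two SPDK targets are running: the nvmf_tgt inside the namespace (listening on 10.0.0.2 port 4420) and a second spdk_tgt on core 1 driven through /var/tmp/tgt2.sock, which the nsid test configures with three null namespaces whose UUIDs were just generated. The lines that follow connect to that second target on 10.0.0.1 port 4421 and verify that each namespace reports an NGUID equal to its UUID with the dashes removed. Reduced to a sketch, the per-namespace check looks like this; the subsystem NQN, address, port and the first UUID are the values from this particular run (they differ on every run), and the snippet only illustrates the comparison, it is not the nsid.sh source.

    # Sketch: NGUID of namespace 1 must equal its UUID with '-' stripped.
    uuid=fb0ad8bc-787a-4143-82b4-9edf50500844
    expected=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')
    nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2
    nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
    [[ $nguid == "$expected" ]] && echo "nsid 1: NGUID matches ($nguid)"
    nvme disconnect -d /dev/nvme0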
00:30:12.681 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:12.682 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:12.940 [2024-12-13 10:32:06.582320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.941 [2024-12-13 10:32:06.694405] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.876 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:13.876 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:13.876 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:30:14.135 [2024-12-13 10:32:07.837877] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:14.135 [2024-12-13 10:32:07.854007] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:30:14.135 nvme0n1 nvme0n2 00:30:14.135 nvme1n1 00:30:14.135 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:30:14.135 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:30:14.135 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:15.512 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:30:15.512 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:30:15.512 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:30:15.512 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:30:15.512 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:30:15.512 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:30:15.512 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:30:15.512 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:15.512 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:15.512 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:15.512 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:30:15.512 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:30:15.512 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:30:16.448 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:16.448 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:16.448 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:16.448 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:30:16.448 10:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid fb0ad8bc-787a-4143-82b4-9edf50500844 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=fb0ad8bc787a414382b49edf50500844 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FB0AD8BC787A414382B49EDF50500844 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ FB0AD8BC787A414382B49EDF50500844 == \F\B\0\A\D\8\B\C\7\8\7\A\4\1\4\3\8\2\B\4\9\E\D\F\5\0\5\0\0\8\4\4 ]] 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 7eb8bb48-fcd2-4428-8b51-9a07957ede43 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7eb8bb48fcd244288b519a07957ede43 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7EB8BB48FCD244288B519A07957EDE43 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 7EB8BB48FCD244288B519A07957EDE43 == \7\E\B\8\B\B\4\8\F\C\D\2\4\4\2\8\8\B\5\1\9\A\0\7\9\5\7\E\D\E\4\3 ]] 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:30:16.448 10:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 44116a09-1017-4429-b173-a5e701bad70d 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=44116a0910174429b173a5e701bad70d 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 44116A0910174429B173A5E701BAD70D 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 44116A0910174429B173A5E701BAD70D == \4\4\1\1\6\A\0\9\1\0\1\7\4\4\2\9\B\1\7\3\A\5\E\7\0\1\B\A\D\7\0\D ]] 00:30:16.448 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:30:16.707 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:30:16.707 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:30:16.707 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 4044821 00:30:16.707 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 4044821 ']' 00:30:16.707 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 4044821 00:30:16.707 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:16.707 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:16.707 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4044821 00:30:16.966 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:16.966 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:16.966 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4044821' 00:30:16.966 killing process with pid 4044821 00:30:16.966 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 4044821 00:30:16.966 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 4044821 00:30:19.501 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:30:19.501 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:19.501 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:30:19.501 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:19.501 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:30:19.501 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:19.501 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:19.501 rmmod nvme_tcp 00:30:19.501 rmmod nvme_fabrics 00:30:19.501 rmmod nvme_keyring 00:30:19.501 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:19.501 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:30:19.501 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:30:19.501 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 4044588 ']' 00:30:19.501 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 4044588 00:30:19.501 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 4044588 ']' 00:30:19.501 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 4044588 00:30:19.501 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:19.501 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:19.501 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4044588 00:30:19.501 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:19.501 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:19.501 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4044588' 00:30:19.501 killing process with pid 4044588 00:30:19.501 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 4044588 00:30:19.501 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 4044588 00:30:20.438 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:20.438 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:20.438 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:20.438 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:30:20.438 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:20.438 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:30:20.438 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:30:20.438 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:20.438 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:20.438 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.438 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.438 10:32:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:22.346 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:22.346 00:30:22.346 real 0m16.419s 00:30:22.346 user 
0m17.032s 00:30:22.346 sys 0m5.373s 00:30:22.346 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:22.346 10:32:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:22.346 ************************************ 00:30:22.346 END TEST nvmf_nsid 00:30:22.346 ************************************ 00:30:22.346 10:32:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:22.605 00:30:22.605 real 18m48.047s 00:30:22.605 user 50m7.002s 00:30:22.605 sys 4m4.049s 00:30:22.605 10:32:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:22.605 10:32:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:22.605 ************************************ 00:30:22.605 END TEST nvmf_target_extra 00:30:22.605 ************************************ 00:30:22.605 10:32:16 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:22.605 10:32:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:22.605 10:32:16 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:22.605 10:32:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:22.605 ************************************ 00:30:22.605 START TEST nvmf_host 00:30:22.605 ************************************ 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:22.605 * Looking for test storage... 00:30:22.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:22.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.605 --rc genhtml_branch_coverage=1 00:30:22.605 --rc genhtml_function_coverage=1 00:30:22.605 --rc genhtml_legend=1 00:30:22.605 --rc geninfo_all_blocks=1 00:30:22.605 --rc geninfo_unexecuted_blocks=1 00:30:22.605 00:30:22.605 ' 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:22.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.605 --rc genhtml_branch_coverage=1 00:30:22.605 --rc genhtml_function_coverage=1 00:30:22.605 --rc genhtml_legend=1 00:30:22.605 --rc geninfo_all_blocks=1 00:30:22.605 --rc geninfo_unexecuted_blocks=1 00:30:22.605 00:30:22.605 ' 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:22.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.605 --rc genhtml_branch_coverage=1 00:30:22.605 --rc genhtml_function_coverage=1 00:30:22.605 --rc genhtml_legend=1 00:30:22.605 --rc geninfo_all_blocks=1 00:30:22.605 --rc geninfo_unexecuted_blocks=1 00:30:22.605 00:30:22.605 ' 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:22.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.605 --rc genhtml_branch_coverage=1 00:30:22.605 --rc genhtml_function_coverage=1 00:30:22.605 --rc genhtml_legend=1 00:30:22.605 --rc geninfo_all_blocks=1 00:30:22.605 --rc geninfo_unexecuted_blocks=1 00:30:22.605 00:30:22.605 ' 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
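The lt 1.15 2 call traced a few lines up is scripts/common.sh deciding that the installed lcov (1.15) predates version 2, which selects the older set of branch/function coverage flags exported right after it. The comparison splits both version strings on '.', '-' and ':' and walks the fields left to right until one side differs. A stripped-down sketch of the same idea follows; it is not the repo's cmp_versions, which also handles '>', '=' and other operators.

    # Sketch: return success when $1 is an older dotted version than $2.
    version_lt() {
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "1.15 is older than 2"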
00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:22.605 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:22.865 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:30:22.865 10:32:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.866 ************************************ 00:30:22.866 START TEST nvmf_multicontroller 00:30:22.866 ************************************ 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:22.866 * Looking for test storage... 
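The "line 33: [: : integer expression expected" message that recurs throughout this log (including just above) is harmless: line 33 of test/nvmf/common.sh runs a numeric test of the form '[' '' -eq 1 ']' on a variable that is empty in this configuration, so [ prints the complaint, the test fails, and the optional branch is simply skipped. A common defensive pattern for that kind of flag check is to default the variable before comparing; the variable name below is purely illustrative and this is not the actual common.sh code.

    # Sketch: give an optionally-set flag a default so the numeric test never
    # sees an empty string ("[: : integer expression expected").
    if [ "${SPDK_TEST_EXAMPLE_FLAG:-0}" -eq 1 ]; then
        echo "optional feature enabled"
    fi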
00:30:22.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:22.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.866 --rc genhtml_branch_coverage=1 00:30:22.866 --rc genhtml_function_coverage=1 00:30:22.866 --rc genhtml_legend=1 00:30:22.866 --rc geninfo_all_blocks=1 00:30:22.866 --rc geninfo_unexecuted_blocks=1 00:30:22.866 00:30:22.866 ' 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:22.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.866 --rc genhtml_branch_coverage=1 00:30:22.866 --rc genhtml_function_coverage=1 00:30:22.866 --rc genhtml_legend=1 00:30:22.866 --rc geninfo_all_blocks=1 00:30:22.866 --rc geninfo_unexecuted_blocks=1 00:30:22.866 00:30:22.866 ' 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:22.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.866 --rc genhtml_branch_coverage=1 00:30:22.866 --rc genhtml_function_coverage=1 00:30:22.866 --rc genhtml_legend=1 00:30:22.866 --rc geninfo_all_blocks=1 00:30:22.866 --rc geninfo_unexecuted_blocks=1 00:30:22.866 00:30:22.866 ' 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:22.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:22.866 --rc genhtml_branch_coverage=1 00:30:22.866 --rc genhtml_function_coverage=1 00:30:22.866 --rc genhtml_legend=1 00:30:22.866 --rc geninfo_all_blocks=1 00:30:22.866 --rc geninfo_unexecuted_blocks=1 00:30:22.866 00:30:22.866 ' 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:22.866 10:32:16 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:22.866 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:22.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:22.867 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:22.867 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:22.867 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:22.867 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:22.867 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:22.867 10:32:16 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:22.867 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:22.867 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:22.867 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:22.867 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:22.867 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:22.867 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:22.867 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:22.867 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:22.867 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:22.867 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:22.867 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:22.867 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:22.867 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:22.867 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:22.867 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:30:22.867 10:32:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:30:28.142 
10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.142 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.143 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.143 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:28.143 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:28.143 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:28.143 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:28.143 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:28.143 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:28.143 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.143 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:28.143 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:28.143 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.143 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.143 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.143 10:32:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:28.143 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.143 10:32:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:28.143 Found net devices under 0000:af:00.0: cvl_0_0 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:28.143 Found net devices under 0000:af:00.1: cvl_0_1 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
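For reference, the device discovery traced above (gather_supported_nvmf_pci_devs) is essentially a sysfs lookup: each supported PCI ID is matched against the installed adapters and the kernel net devices bound to them are collected. A minimal standalone sketch of the same check, using the 0000:af:00.0 address and cvl_0_0 name reported in this run (any other E810 function would behave the same way):

    pci=0000:af:00.0
    # vendor/device should read 0x8086 / 0x159b for the E810 functions matched above
    cat /sys/bus/pci/devices/$pci/vendor /sys/bus/pci/devices/$pci/device
    # the net devices the script collects live under the PCI function's net/ directory
    ls /sys/bus/pci/devices/$pci/net/        # -> cvl_0_0 in this run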
00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:28.143 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:28.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:28.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:30:28.402 00:30:28.402 --- 10.0.0.2 ping statistics --- 00:30:28.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.402 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:28.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:28.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:30:28.402 00:30:28.402 --- 10.0.0.1 ping statistics --- 00:30:28.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.402 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=4049534 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 4049534 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 4049534 ']' 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:28.402 10:32:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:28.661 [2024-12-13 10:32:22.364642] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:30:28.661 [2024-12-13 10:32:22.364731] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:28.661 [2024-12-13 10:32:22.480735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:28.920 [2024-12-13 10:32:22.585662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:28.920 [2024-12-13 10:32:22.585705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:28.920 [2024-12-13 10:32:22.585715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:28.920 [2024-12-13 10:32:22.585725] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:28.920 [2024-12-13 10:32:22.585732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:28.920 [2024-12-13 10:32:22.587924] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:28.920 [2024-12-13 10:32:22.587991] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.920 [2024-12-13 10:32:22.588000] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:29.488 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:29.488 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:29.488 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:29.488 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:29.488 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:29.488 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:29.488 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:29.488 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.488 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:29.488 [2024-12-13 10:32:23.196636] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:29.488 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.488 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:29.488 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.488 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:29.488 Malloc0 00:30:29.489 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.489 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:29.489 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.489 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:30:29.489 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.489 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:29.489 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.489 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:29.489 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.489 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:29.489 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.489 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:29.489 [2024-12-13 10:32:23.305925] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:29.489 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.489 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:29.489 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.489 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:29.489 [2024-12-13 10:32:23.317870] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:29.489 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.489 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:29.489 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.489 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:29.748 Malloc1 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=4049773 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 4049773 /var/tmp/bdevperf.sock 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 4049773 ']' 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:29.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
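For reference, the target-side configuration that rpc_cmd built up above can be reproduced against a standalone nvmf_tgt with scripts/rpc.py; rpc_cmd is only a thin wrapper around the same RPCs. A sketch using the addresses and names from this run (the relative rpc.py path is an assumption; adjust it to the SPDK checkout):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2 is created the same way with Malloc1 and serial SPDK00000000000002

bdevperf is then launched with -z so that it waits to be configured over its own RPC socket, which is why the rest of the test attaches controllers through /var/tmp/bdevperf.sock rather than the target's default socket.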
00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:29.748 10:32:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:30.686 NVMe0n1 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.686 1 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:30.686 request: 00:30:30.686 { 00:30:30.686 "name": "NVMe0", 00:30:30.686 "trtype": "tcp", 00:30:30.686 "traddr": "10.0.0.2", 00:30:30.686 "adrfam": "ipv4", 00:30:30.686 "trsvcid": "4420", 00:30:30.686 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:30:30.686 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:30.686 "hostaddr": "10.0.0.1", 00:30:30.686 "prchk_reftag": false, 00:30:30.686 "prchk_guard": false, 00:30:30.686 "hdgst": false, 00:30:30.686 "ddgst": false, 00:30:30.686 "allow_unrecognized_csi": false, 00:30:30.686 "method": "bdev_nvme_attach_controller", 00:30:30.686 "req_id": 1 00:30:30.686 } 00:30:30.686 Got JSON-RPC error response 00:30:30.686 response: 00:30:30.686 { 00:30:30.686 "code": -114, 00:30:30.686 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:30.686 } 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.686 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:30.686 request: 00:30:30.686 { 00:30:30.686 "name": "NVMe0", 00:30:30.686 "trtype": "tcp", 00:30:30.686 "traddr": "10.0.0.2", 00:30:30.686 "adrfam": "ipv4", 00:30:30.686 "trsvcid": "4420", 00:30:30.686 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:30.686 "hostaddr": "10.0.0.1", 00:30:30.686 "prchk_reftag": false, 00:30:30.686 "prchk_guard": false, 00:30:30.686 "hdgst": false, 00:30:30.686 "ddgst": false, 00:30:30.686 "allow_unrecognized_csi": false, 00:30:30.686 "method": "bdev_nvme_attach_controller", 00:30:30.686 "req_id": 1 00:30:30.686 } 00:30:30.686 Got JSON-RPC error response 00:30:30.686 response: 00:30:30.686 { 00:30:30.686 "code": -114, 00:30:30.687 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:30.687 } 00:30:30.687 10:32:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:30.687 request: 00:30:30.687 { 00:30:30.687 "name": "NVMe0", 00:30:30.687 "trtype": "tcp", 00:30:30.687 "traddr": "10.0.0.2", 00:30:30.687 "adrfam": "ipv4", 00:30:30.687 "trsvcid": "4420", 00:30:30.687 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:30.687 "hostaddr": "10.0.0.1", 00:30:30.687 "prchk_reftag": false, 00:30:30.687 "prchk_guard": false, 00:30:30.687 "hdgst": false, 00:30:30.687 "ddgst": false, 00:30:30.687 "multipath": "disable", 00:30:30.687 "allow_unrecognized_csi": false, 00:30:30.687 "method": "bdev_nvme_attach_controller", 00:30:30.687 "req_id": 1 00:30:30.687 } 00:30:30.687 Got JSON-RPC error response 00:30:30.687 response: 00:30:30.687 { 00:30:30.687 "code": -114, 00:30:30.687 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:30:30.687 } 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:30.687 10:32:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:30.687 request: 00:30:30.687 { 00:30:30.687 "name": "NVMe0", 00:30:30.687 "trtype": "tcp", 00:30:30.687 "traddr": "10.0.0.2", 00:30:30.687 "adrfam": "ipv4", 00:30:30.687 "trsvcid": "4420", 00:30:30.687 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:30.687 "hostaddr": "10.0.0.1", 00:30:30.687 "prchk_reftag": false, 00:30:30.687 "prchk_guard": false, 00:30:30.687 "hdgst": false, 00:30:30.687 "ddgst": false, 00:30:30.687 "multipath": "failover", 00:30:30.687 "allow_unrecognized_csi": false, 00:30:30.687 "method": "bdev_nvme_attach_controller", 00:30:30.687 "req_id": 1 00:30:30.687 } 00:30:30.687 Got JSON-RPC error response 00:30:30.687 response: 00:30:30.687 { 00:30:30.687 "code": -114, 00:30:30.687 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:30.687 } 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.687 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:30.946 NVMe0n1 00:30:30.946 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
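For reference, the controller-name reuse checks exercised above reduce to the following bdev_nvme_attach_controller calls against the bdevperf RPC socket (a sketch with scripts/rpc.py; the flags are the same ones rpc_cmd forwarded in this run):

    # first attach creates NVMe0n1
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1

    # re-using the name NVMe0 with a different hostnqn, with subsystem cnode2, or with
    # -x disable / -x failover against the same path is rejected with JSON-RPC error -114,
    # exactly as shown in the request/response dumps above

    # attaching NVMe0 to the second listener (port 4421) is accepted in this run
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1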
00:30:30.946 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:30.946 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.946 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:30.946 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.946 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:30.946 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.946 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:30.946 00:30:30.946 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.946 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:30.946 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:30.946 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.946 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:30.946 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.946 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:30.946 10:32:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:32.323 { 00:30:32.323 "results": [ 00:30:32.323 { 00:30:32.323 "job": "NVMe0n1", 00:30:32.323 "core_mask": "0x1", 00:30:32.323 "workload": "write", 00:30:32.323 "status": "finished", 00:30:32.323 "queue_depth": 128, 00:30:32.323 "io_size": 4096, 00:30:32.323 "runtime": 1.004895, 00:30:32.323 "iops": 21486.822006279264, 00:30:32.323 "mibps": 83.93289846202838, 00:30:32.323 "io_failed": 0, 00:30:32.323 "io_timeout": 0, 00:30:32.323 "avg_latency_us": 5949.104863529702, 00:30:32.323 "min_latency_us": 3401.630476190476, 00:30:32.323 "max_latency_us": 10673.005714285715 00:30:32.323 } 00:30:32.323 ], 00:30:32.323 "core_count": 1 00:30:32.323 } 00:30:32.323 10:32:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:32.323 10:32:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.323 10:32:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:32.323 10:32:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.323 10:32:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:30:32.323 10:32:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 4049773 00:30:32.323 10:32:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 4049773 ']' 00:30:32.323 10:32:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 4049773 00:30:32.323 10:32:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:32.323 10:32:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:32.323 10:32:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4049773 00:30:32.323 10:32:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:32.323 10:32:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:32.323 10:32:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4049773' 00:30:32.323 killing process with pid 4049773 00:30:32.323 10:32:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 4049773 00:30:32.323 10:32:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 4049773 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:30:33.260 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:33.260 [2024-12-13 10:32:23.503660] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:30:33.260 [2024-12-13 10:32:23.503769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4049773 ] 00:30:33.260 [2024-12-13 10:32:23.616642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.260 [2024-12-13 10:32:23.731081] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:33.260 [2024-12-13 10:32:24.731137] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name c649f8c3-8e08-4b42-954e-932937ac3cfb already exists 00:30:33.260 [2024-12-13 10:32:24.731180] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:c649f8c3-8e08-4b42-954e-932937ac3cfb alias for bdev NVMe1n1 00:30:33.260 [2024-12-13 10:32:24.731193] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:33.260 Running I/O for 1 seconds... 00:30:33.260 21464.00 IOPS, 83.84 MiB/s 00:30:33.260 Latency(us) 00:30:33.260 [2024-12-13T09:32:27.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:33.260 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:33.260 NVMe0n1 : 1.00 21486.82 83.93 0.00 0.00 5949.10 3401.63 10673.01 00:30:33.260 [2024-12-13T09:32:27.151Z] =================================================================================================================== 00:30:33.260 [2024-12-13T09:32:27.151Z] Total : 21486.82 83.93 0.00 0.00 5949.10 3401.63 10673.01 00:30:33.260 Received shutdown signal, test time was about 1.000000 seconds 00:30:33.260 00:30:33.260 Latency(us) 00:30:33.260 [2024-12-13T09:32:27.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:33.260 [2024-12-13T09:32:27.151Z] =================================================================================================================== 00:30:33.260 [2024-12-13T09:32:27.151Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:33.260 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:33.260 rmmod nvme_tcp 00:30:33.260 rmmod nvme_fabrics 00:30:33.260 rmmod nvme_keyring 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:30:33.260 
10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 4049534 ']' 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 4049534 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 4049534 ']' 00:30:33.260 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 4049534 00:30:33.261 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:33.261 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:33.261 10:32:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4049534 00:30:33.261 10:32:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:33.261 10:32:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:33.261 10:32:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4049534' 00:30:33.261 killing process with pid 4049534 00:30:33.261 10:32:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 4049534 00:30:33.261 10:32:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 4049534 00:30:34.638 10:32:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:34.638 10:32:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:34.638 10:32:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:34.638 10:32:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:30:34.639 10:32:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:30:34.639 10:32:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:34.639 10:32:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:30:34.639 10:32:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:34.639 10:32:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:34.639 10:32:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.639 10:32:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:34.639 10:32:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:37.178 00:30:37.178 real 0m14.032s 00:30:37.178 user 0m22.525s 00:30:37.178 sys 0m5.111s 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.178 ************************************ 00:30:37.178 END TEST nvmf_multicontroller 00:30:37.178 ************************************ 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.178 ************************************ 00:30:37.178 START TEST nvmf_aer 00:30:37.178 ************************************ 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:37.178 * Looking for test storage... 00:30:37.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:37.178 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:37.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.179 --rc genhtml_branch_coverage=1 00:30:37.179 --rc genhtml_function_coverage=1 00:30:37.179 --rc genhtml_legend=1 00:30:37.179 --rc geninfo_all_blocks=1 00:30:37.179 --rc geninfo_unexecuted_blocks=1 00:30:37.179 00:30:37.179 ' 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:37.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.179 --rc genhtml_branch_coverage=1 00:30:37.179 --rc genhtml_function_coverage=1 00:30:37.179 --rc genhtml_legend=1 00:30:37.179 --rc geninfo_all_blocks=1 00:30:37.179 --rc geninfo_unexecuted_blocks=1 00:30:37.179 00:30:37.179 ' 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:37.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.179 --rc genhtml_branch_coverage=1 00:30:37.179 --rc genhtml_function_coverage=1 00:30:37.179 --rc genhtml_legend=1 00:30:37.179 --rc geninfo_all_blocks=1 00:30:37.179 --rc geninfo_unexecuted_blocks=1 00:30:37.179 00:30:37.179 ' 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:37.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.179 --rc genhtml_branch_coverage=1 00:30:37.179 --rc genhtml_function_coverage=1 00:30:37.179 --rc genhtml_legend=1 00:30:37.179 --rc geninfo_all_blocks=1 00:30:37.179 --rc geninfo_unexecuted_blocks=1 00:30:37.179 00:30:37.179 ' 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:37.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:37.179 10:32:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:42.464 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:42.464 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.464 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:42.465 Found net devices under 0000:af:00.0: cvl_0_0 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:42.465 10:32:35 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:42.465 Found net devices under 0000:af:00.1: cvl_0_1 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:42.465 10:32:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:42.465 
10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:42.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:42.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:30:42.465 00:30:42.465 --- 10.0.0.2 ping statistics --- 00:30:42.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.465 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:42.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:42.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:30:42.465 00:30:42.465 --- 10.0.0.1 ping statistics --- 00:30:42.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.465 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=4053927 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 4053927 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 4053927 ']' 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:42.465 10:32:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:42.465 [2024-12-13 10:32:36.191843] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:30:42.465 [2024-12-13 10:32:36.191931] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.465 [2024-12-13 10:32:36.310619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:42.724 [2024-12-13 10:32:36.419172] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:42.724 [2024-12-13 10:32:36.419218] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:42.724 [2024-12-13 10:32:36.419228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:42.724 [2024-12-13 10:32:36.419238] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:42.724 [2024-12-13 10:32:36.419246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:42.724 [2024-12-13 10:32:36.421629] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.724 [2024-12-13 10:32:36.421704] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:42.724 [2024-12-13 10:32:36.421720] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.724 [2024-12-13 10:32:36.421731] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:43.291 [2024-12-13 10:32:37.060308] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:43.291 Malloc0 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.291 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:43.291 [2024-12-13 10:32:37.182126] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:43.550 [ 00:30:43.550 { 00:30:43.550 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:43.550 "subtype": "Discovery", 00:30:43.550 "listen_addresses": [], 00:30:43.550 "allow_any_host": true, 00:30:43.550 "hosts": [] 00:30:43.550 }, 00:30:43.550 { 00:30:43.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:43.550 "subtype": "NVMe", 00:30:43.550 "listen_addresses": [ 00:30:43.550 { 00:30:43.550 "trtype": "TCP", 00:30:43.550 "adrfam": "IPv4", 00:30:43.550 "traddr": "10.0.0.2", 00:30:43.550 "trsvcid": "4420" 00:30:43.550 } 00:30:43.550 ], 00:30:43.550 "allow_any_host": true, 00:30:43.550 "hosts": [], 00:30:43.550 "serial_number": "SPDK00000000000001", 00:30:43.550 "model_number": "SPDK bdev Controller", 00:30:43.550 "max_namespaces": 2, 00:30:43.550 "min_cntlid": 1, 00:30:43.550 "max_cntlid": 65519, 00:30:43.550 "namespaces": [ 00:30:43.550 { 00:30:43.550 "nsid": 1, 00:30:43.550 "bdev_name": "Malloc0", 00:30:43.550 "name": "Malloc0", 00:30:43.550 "nguid": "347EC8E48B1048D59A06334DFA94397F", 00:30:43.550 "uuid": "347ec8e4-8b10-48d5-9a06-334dfa94397f" 00:30:43.550 } 00:30:43.550 ] 00:30:43.550 } 00:30:43.550 ] 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=4054159 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:30:43.550 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:43.809 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:43.809 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:43.809 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:30:43.809 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:43.809 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.809 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:43.809 Malloc1 00:30:43.809 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.809 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:43.809 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.809 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:43.809 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.809 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:43.809 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.809 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:43.809 [ 00:30:43.809 { 00:30:43.809 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:43.809 "subtype": "Discovery", 00:30:43.809 "listen_addresses": [], 00:30:43.809 "allow_any_host": true, 00:30:43.809 "hosts": [] 00:30:43.809 }, 00:30:43.809 { 00:30:43.809 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:43.809 "subtype": "NVMe", 00:30:43.809 "listen_addresses": [ 00:30:43.809 { 00:30:43.809 "trtype": "TCP", 00:30:43.809 "adrfam": "IPv4", 00:30:43.810 "traddr": "10.0.0.2", 00:30:43.810 "trsvcid": "4420" 00:30:43.810 } 00:30:43.810 ], 00:30:43.810 "allow_any_host": true, 00:30:43.810 "hosts": [], 00:30:43.810 "serial_number": "SPDK00000000000001", 00:30:43.810 "model_number": "SPDK bdev Controller", 00:30:43.810 "max_namespaces": 2, 00:30:43.810 "min_cntlid": 1, 00:30:43.810 "max_cntlid": 65519, 00:30:43.810 "namespaces": [ 00:30:43.810 
{ 00:30:43.810 "nsid": 1, 00:30:43.810 "bdev_name": "Malloc0", 00:30:43.810 "name": "Malloc0", 00:30:43.810 "nguid": "347EC8E48B1048D59A06334DFA94397F", 00:30:44.069 "uuid": "347ec8e4-8b10-48d5-9a06-334dfa94397f" 00:30:44.069 }, 00:30:44.069 { 00:30:44.069 "nsid": 2, 00:30:44.069 "bdev_name": "Malloc1", 00:30:44.069 "name": "Malloc1", 00:30:44.069 "nguid": "4B68448C81284515AA00DA990C74E57B", 00:30:44.069 "uuid": "4b68448c-8128-4515-aa00-da990c74e57b" 00:30:44.069 } 00:30:44.069 ] 00:30:44.069 } 00:30:44.069 ] 00:30:44.069 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.069 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 4054159 00:30:44.069 Asynchronous Event Request test 00:30:44.069 Attaching to 10.0.0.2 00:30:44.069 Attached to 10.0.0.2 00:30:44.069 Registering asynchronous event callbacks... 00:30:44.069 Starting namespace attribute notice tests for all controllers... 00:30:44.069 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:44.069 aer_cb - Changed Namespace 00:30:44.069 Cleaning up... 00:30:44.069 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:44.069 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.069 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:44.069 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.069 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:44.069 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.069 10:32:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:44.328 rmmod nvme_tcp 00:30:44.328 rmmod nvme_fabrics 00:30:44.328 rmmod nvme_keyring 00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 4053927 ']' 
00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 4053927 00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 4053927 ']' 00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 4053927 00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:44.328 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4053927 00:30:44.587 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:44.587 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:44.587 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4053927' 00:30:44.587 killing process with pid 4053927 00:30:44.587 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 4053927 00:30:44.587 10:32:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 4053927 00:30:45.523 10:32:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:45.523 10:32:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:45.523 10:32:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:45.523 10:32:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:30:45.781 10:32:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:30:45.781 10:32:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:45.781 10:32:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:30:45.781 10:32:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:45.781 10:32:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:45.781 10:32:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.781 10:32:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:45.781 10:32:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.686 10:32:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:47.686 00:30:47.686 real 0m10.853s 00:30:47.686 user 0m12.600s 00:30:47.686 sys 0m4.584s 00:30:47.686 10:32:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:47.686 10:32:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:47.686 ************************************ 00:30:47.686 END TEST nvmf_aer 00:30:47.686 ************************************ 00:30:47.686 10:32:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:47.686 10:32:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:47.686 10:32:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:47.686 10:32:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:47.945 ************************************ 00:30:47.945 START TEST nvmf_async_init 00:30:47.945 
************************************ 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:47.945 * Looking for test storage... 00:30:47.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:47.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.945 --rc genhtml_branch_coverage=1 00:30:47.945 --rc genhtml_function_coverage=1 00:30:47.945 --rc genhtml_legend=1 00:30:47.945 --rc geninfo_all_blocks=1 00:30:47.945 --rc geninfo_unexecuted_blocks=1 00:30:47.945 00:30:47.945 ' 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:47.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.945 --rc genhtml_branch_coverage=1 00:30:47.945 --rc genhtml_function_coverage=1 00:30:47.945 --rc genhtml_legend=1 00:30:47.945 --rc geninfo_all_blocks=1 00:30:47.945 --rc geninfo_unexecuted_blocks=1 00:30:47.945 00:30:47.945 ' 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:47.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.945 --rc genhtml_branch_coverage=1 00:30:47.945 --rc genhtml_function_coverage=1 00:30:47.945 --rc genhtml_legend=1 00:30:47.945 --rc geninfo_all_blocks=1 00:30:47.945 --rc geninfo_unexecuted_blocks=1 00:30:47.945 00:30:47.945 ' 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:47.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.945 --rc genhtml_branch_coverage=1 00:30:47.945 --rc genhtml_function_coverage=1 00:30:47.945 --rc genhtml_legend=1 00:30:47.945 --rc geninfo_all_blocks=1 00:30:47.945 --rc geninfo_unexecuted_blocks=1 00:30:47.945 00:30:47.945 ' 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:47.945 10:32:41 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:47.945 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:47.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:47.946 10:32:41 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=989e5fd2f97c49c29e7ff1d6f99f43a1 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:47.946 10:32:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:53.402 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:53.403 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:53.403 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:53.403 Found net devices under 0000:af:00.0: cvl_0_0 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:53.403 Found net devices under 0000:af:00.1: cvl_0_1 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:53.403 10:32:46 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:53.403 10:32:46 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:53.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:53.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:30:53.403 00:30:53.403 --- 10.0.0.2 ping statistics --- 00:30:53.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.403 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:53.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:53.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:30:53.403 00:30:53.403 --- 10.0.0.1 ping statistics --- 00:30:53.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.403 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=4057867 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 4057867 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 4057867 ']' 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:53.403 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:53.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:53.404 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:53.404 10:32:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:53.404 [2024-12-13 10:32:47.234546] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
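The trace above prepares the E810 link as a point-to-point NVMe/TCP path before the target starts: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator-side port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, an iptables ACCEPT rule is opened for port 4420, and both directions are ping-verified. A minimal standalone sketch of the same setup, using the interface names and addresses from this run (on another node the net device names under the PCI functions will differ):

    # move the target port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the default NVMe/TCP port and verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1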
00:30:53.404 [2024-12-13 10:32:47.234651] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:53.662 [2024-12-13 10:32:47.351642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.663 [2024-12-13 10:32:47.456200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:53.663 [2024-12-13 10:32:47.456247] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:53.663 [2024-12-13 10:32:47.456257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:53.663 [2024-12-13 10:32:47.456267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:53.663 [2024-12-13 10:32:47.456275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:53.663 [2024-12-13 10:32:47.457753] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.230 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:54.230 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:30:54.230 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:54.230 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:54.230 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:54.230 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:54.230 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:54.230 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.231 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:54.231 [2024-12-13 10:32:48.077124] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:54.231 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.231 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:54.231 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.231 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:54.231 null0 00:30:54.231 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.231 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:54.231 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.231 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:54.231 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.231 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:54.231 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:54.231 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:54.231 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.231 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 989e5fd2f97c49c29e7ff1d6f99f43a1 00:30:54.231 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.231 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:54.231 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.231 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:54.231 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.489 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:54.490 [2024-12-13 10:32:48.129400] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:54.490 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.490 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:54.490 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.490 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:54.490 nvme0n1 00:30:54.490 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.490 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:54.490 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.490 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:54.749 [ 00:30:54.749 { 00:30:54.749 "name": "nvme0n1", 00:30:54.749 "aliases": [ 00:30:54.749 "989e5fd2-f97c-49c2-9e7f-f1d6f99f43a1" 00:30:54.749 ], 00:30:54.749 "product_name": "NVMe disk", 00:30:54.749 "block_size": 512, 00:30:54.749 "num_blocks": 2097152, 00:30:54.749 "uuid": "989e5fd2-f97c-49c2-9e7f-f1d6f99f43a1", 00:30:54.749 "numa_id": 1, 00:30:54.749 "assigned_rate_limits": { 00:30:54.749 "rw_ios_per_sec": 0, 00:30:54.749 "rw_mbytes_per_sec": 0, 00:30:54.749 "r_mbytes_per_sec": 0, 00:30:54.749 "w_mbytes_per_sec": 0 00:30:54.749 }, 00:30:54.749 "claimed": false, 00:30:54.749 "zoned": false, 00:30:54.749 "supported_io_types": { 00:30:54.749 "read": true, 00:30:54.749 "write": true, 00:30:54.749 "unmap": false, 00:30:54.749 "flush": true, 00:30:54.749 "reset": true, 00:30:54.749 "nvme_admin": true, 00:30:54.749 "nvme_io": true, 00:30:54.749 "nvme_io_md": false, 00:30:54.749 "write_zeroes": true, 00:30:54.749 "zcopy": false, 00:30:54.749 "get_zone_info": false, 00:30:54.749 "zone_management": false, 00:30:54.749 "zone_append": false, 00:30:54.749 "compare": true, 00:30:54.749 "compare_and_write": true, 00:30:54.749 "abort": true, 00:30:54.749 "seek_hole": false, 00:30:54.749 "seek_data": false, 00:30:54.749 "copy": true, 00:30:54.749 "nvme_iov_md": false 00:30:54.749 }, 00:30:54.749 
"memory_domains": [ 00:30:54.749 { 00:30:54.749 "dma_device_id": "system", 00:30:54.749 "dma_device_type": 1 00:30:54.749 } 00:30:54.749 ], 00:30:54.749 "driver_specific": { 00:30:54.749 "nvme": [ 00:30:54.749 { 00:30:54.749 "trid": { 00:30:54.749 "trtype": "TCP", 00:30:54.749 "adrfam": "IPv4", 00:30:54.749 "traddr": "10.0.0.2", 00:30:54.749 "trsvcid": "4420", 00:30:54.749 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:54.749 }, 00:30:54.749 "ctrlr_data": { 00:30:54.749 "cntlid": 1, 00:30:54.749 "vendor_id": "0x8086", 00:30:54.749 "model_number": "SPDK bdev Controller", 00:30:54.749 "serial_number": "00000000000000000000", 00:30:54.749 "firmware_revision": "25.01", 00:30:54.749 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:54.749 "oacs": { 00:30:54.749 "security": 0, 00:30:54.749 "format": 0, 00:30:54.749 "firmware": 0, 00:30:54.749 "ns_manage": 0 00:30:54.749 }, 00:30:54.749 "multi_ctrlr": true, 00:30:54.749 "ana_reporting": false 00:30:54.749 }, 00:30:54.749 "vs": { 00:30:54.749 "nvme_version": "1.3" 00:30:54.749 }, 00:30:54.749 "ns_data": { 00:30:54.749 "id": 1, 00:30:54.749 "can_share": true 00:30:54.749 } 00:30:54.749 } 00:30:54.749 ], 00:30:54.749 "mp_policy": "active_passive" 00:30:54.749 } 00:30:54.749 } 00:30:54.749 ] 00:30:54.749 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.749 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:54.749 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.749 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:54.749 [2024-12-13 10:32:48.399675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:54.749 [2024-12-13 10:32:48.399771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:30:54.749 [2024-12-13 10:32:48.541572] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:30:54.749 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.749 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:54.749 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.749 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:54.749 [ 00:30:54.749 { 00:30:54.749 "name": "nvme0n1", 00:30:54.749 "aliases": [ 00:30:54.749 "989e5fd2-f97c-49c2-9e7f-f1d6f99f43a1" 00:30:54.749 ], 00:30:54.749 "product_name": "NVMe disk", 00:30:54.749 "block_size": 512, 00:30:54.749 "num_blocks": 2097152, 00:30:54.749 "uuid": "989e5fd2-f97c-49c2-9e7f-f1d6f99f43a1", 00:30:54.749 "numa_id": 1, 00:30:54.749 "assigned_rate_limits": { 00:30:54.749 "rw_ios_per_sec": 0, 00:30:54.749 "rw_mbytes_per_sec": 0, 00:30:54.749 "r_mbytes_per_sec": 0, 00:30:54.749 "w_mbytes_per_sec": 0 00:30:54.749 }, 00:30:54.749 "claimed": false, 00:30:54.749 "zoned": false, 00:30:54.749 "supported_io_types": { 00:30:54.749 "read": true, 00:30:54.749 "write": true, 00:30:54.749 "unmap": false, 00:30:54.749 "flush": true, 00:30:54.749 "reset": true, 00:30:54.749 "nvme_admin": true, 00:30:54.749 "nvme_io": true, 00:30:54.749 "nvme_io_md": false, 00:30:54.749 "write_zeroes": true, 00:30:54.749 "zcopy": false, 00:30:54.749 "get_zone_info": false, 00:30:54.749 "zone_management": false, 00:30:54.749 "zone_append": false, 00:30:54.749 "compare": true, 00:30:54.749 "compare_and_write": true, 00:30:54.749 "abort": true, 00:30:54.749 "seek_hole": false, 00:30:54.749 "seek_data": false, 00:30:54.749 "copy": true, 00:30:54.749 "nvme_iov_md": false 00:30:54.749 }, 00:30:54.749 "memory_domains": [ 00:30:54.749 { 00:30:54.749 "dma_device_id": "system", 00:30:54.749 "dma_device_type": 1 00:30:54.749 } 00:30:54.749 ], 00:30:54.749 "driver_specific": { 00:30:54.749 "nvme": [ 00:30:54.749 { 00:30:54.749 "trid": { 00:30:54.749 "trtype": "TCP", 00:30:54.749 "adrfam": "IPv4", 00:30:54.749 "traddr": "10.0.0.2", 00:30:54.749 "trsvcid": "4420", 00:30:54.749 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:54.749 }, 00:30:54.749 "ctrlr_data": { 00:30:54.749 "cntlid": 2, 00:30:54.749 "vendor_id": "0x8086", 00:30:54.749 "model_number": "SPDK bdev Controller", 00:30:54.749 "serial_number": "00000000000000000000", 00:30:54.749 "firmware_revision": "25.01", 00:30:54.749 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:54.749 "oacs": { 00:30:54.749 "security": 0, 00:30:54.749 "format": 0, 00:30:54.749 "firmware": 0, 00:30:54.749 "ns_manage": 0 00:30:54.749 }, 00:30:54.749 "multi_ctrlr": true, 00:30:54.749 "ana_reporting": false 00:30:54.749 }, 00:30:54.749 "vs": { 00:30:54.749 "nvme_version": "1.3" 00:30:54.749 }, 00:30:54.749 "ns_data": { 00:30:54.749 "id": 1, 00:30:54.749 "can_share": true 00:30:54.749 } 00:30:54.749 } 00:30:54.749 ], 00:30:54.750 "mp_policy": "active_passive" 00:30:54.750 } 00:30:54.750 } 00:30:54.750 ] 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
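The remainder of the test repeats the attach over a TLS-protected listener: a PSK in the NVMeTLSkey-1:01: interchange format is written to a temp file with 0600 permissions, registered with the keyring, and required on a second listener on port 4421; the host then reconnects presenting its NQN and the same key. A sketch of that sequence as it appears in the trace below (the key path comes from mktemp in this run, the PSK value is the test's sample key, and ./scripts/rpc.py is again assumed in place of the harness's rpc_cmd):

    # register the PSK and open a TLS-only listener on port 4421
    KEY=/tmp/tmp.yv3CEzcjAc        # path produced by mktemp in this run
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
    chmod 0600 "$KEY"
    ./scripts/rpc.py keyring_file_add_key key0 "$KEY"
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    # host side: reconnect to 4421 with the host NQN and the same key
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0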
00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.yv3CEzcjAc 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.yv3CEzcjAc 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.yv3CEzcjAc 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:54.750 [2024-12-13 10:32:48.620383] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:54.750 [2024-12-13 10:32:48.620565] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.750 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:54.750 [2024-12-13 10:32:48.640454] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:55.009 nvme0n1 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:55.009 [ 00:30:55.009 { 00:30:55.009 "name": "nvme0n1", 00:30:55.009 "aliases": [ 00:30:55.009 "989e5fd2-f97c-49c2-9e7f-f1d6f99f43a1" 00:30:55.009 ], 00:30:55.009 "product_name": "NVMe disk", 00:30:55.009 "block_size": 512, 00:30:55.009 "num_blocks": 2097152, 00:30:55.009 "uuid": "989e5fd2-f97c-49c2-9e7f-f1d6f99f43a1", 00:30:55.009 "numa_id": 1, 00:30:55.009 "assigned_rate_limits": { 00:30:55.009 "rw_ios_per_sec": 0, 00:30:55.009 "rw_mbytes_per_sec": 0, 00:30:55.009 "r_mbytes_per_sec": 0, 00:30:55.009 "w_mbytes_per_sec": 0 00:30:55.009 }, 00:30:55.009 "claimed": false, 00:30:55.009 "zoned": false, 00:30:55.009 "supported_io_types": { 00:30:55.009 "read": true, 00:30:55.009 "write": true, 00:30:55.009 "unmap": false, 00:30:55.009 "flush": true, 00:30:55.009 "reset": true, 00:30:55.009 "nvme_admin": true, 00:30:55.009 "nvme_io": true, 00:30:55.009 "nvme_io_md": false, 00:30:55.009 "write_zeroes": true, 00:30:55.009 "zcopy": false, 00:30:55.009 "get_zone_info": false, 00:30:55.009 "zone_management": false, 00:30:55.009 "zone_append": false, 00:30:55.009 "compare": true, 00:30:55.009 "compare_and_write": true, 00:30:55.009 "abort": true, 00:30:55.009 "seek_hole": false, 00:30:55.009 "seek_data": false, 00:30:55.009 "copy": true, 00:30:55.009 "nvme_iov_md": false 00:30:55.009 }, 00:30:55.009 "memory_domains": [ 00:30:55.009 { 00:30:55.009 "dma_device_id": "system", 00:30:55.009 "dma_device_type": 1 00:30:55.009 } 00:30:55.009 ], 00:30:55.009 "driver_specific": { 00:30:55.009 "nvme": [ 00:30:55.009 { 00:30:55.009 "trid": { 00:30:55.009 "trtype": "TCP", 00:30:55.009 "adrfam": "IPv4", 00:30:55.009 "traddr": "10.0.0.2", 00:30:55.009 "trsvcid": "4421", 00:30:55.009 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:55.009 }, 00:30:55.009 "ctrlr_data": { 00:30:55.009 "cntlid": 3, 00:30:55.009 "vendor_id": "0x8086", 00:30:55.009 "model_number": "SPDK bdev Controller", 00:30:55.009 "serial_number": "00000000000000000000", 00:30:55.009 "firmware_revision": "25.01", 00:30:55.009 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:55.009 "oacs": { 00:30:55.009 "security": 0, 00:30:55.009 "format": 0, 00:30:55.009 "firmware": 0, 00:30:55.009 "ns_manage": 0 00:30:55.009 }, 00:30:55.009 "multi_ctrlr": true, 00:30:55.009 "ana_reporting": false 00:30:55.009 }, 00:30:55.009 "vs": { 00:30:55.009 "nvme_version": "1.3" 00:30:55.009 }, 00:30:55.009 "ns_data": { 00:30:55.009 "id": 1, 00:30:55.009 "can_share": true 00:30:55.009 } 00:30:55.009 } 00:30:55.009 ], 00:30:55.009 "mp_policy": "active_passive" 00:30:55.009 } 00:30:55.009 } 00:30:55.009 ] 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.yv3CEzcjAc 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
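With the TLS path verified (the dump above shows the same namespace UUID now reached as cntlid 3 via port 4421), the test detaches the controller, removes the key file, and calls nvmftestfini, which in the trace below unloads the nvme-tcp, nvme-fabrics and nvme-keyring modules, kills the target process, restores the iptables rules tagged SPDK_NVMF, flushes the initiator address and tears down the test namespace. A rough teardown sketch under the same assumptions as above:

    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
    rm -f /tmp/tmp.yv3CEzcjAc
    modprobe -r nvme-tcp nvme-fabrics nvme-keyring
    # drop only the rules the test added (they carry an SPDK_NVMF comment)
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk   # assumption: _remove_spdk_ns deletes the namespace this way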
00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:55.009 rmmod nvme_tcp 00:30:55.009 rmmod nvme_fabrics 00:30:55.009 rmmod nvme_keyring 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 4057867 ']' 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 4057867 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 4057867 ']' 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 4057867 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4057867 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4057867' 00:30:55.009 killing process with pid 4057867 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 4057867 00:30:55.009 10:32:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 4057867 00:30:56.386 10:32:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:56.386 10:32:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:56.386 10:32:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:56.386 10:32:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:30:56.386 10:32:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:30:56.386 10:32:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:30:56.386 10:32:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:56.386 10:32:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:56.386 10:32:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:56.386 10:32:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:30:56.386 10:32:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.386 10:32:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.288 10:32:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:58.288 00:30:58.288 real 0m10.491s 00:30:58.288 user 0m4.609s 00:30:58.288 sys 0m4.482s 00:30:58.288 10:32:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:58.288 10:32:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:58.288 ************************************ 00:30:58.288 END TEST nvmf_async_init 00:30:58.288 ************************************ 00:30:58.288 10:32:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:58.288 10:32:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:58.288 10:32:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:58.288 10:32:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.288 ************************************ 00:30:58.288 START TEST dma 00:30:58.288 ************************************ 00:30:58.288 10:32:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:58.547 * Looking for test storage... 00:30:58.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:58.547 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:58.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.548 --rc genhtml_branch_coverage=1 00:30:58.548 --rc genhtml_function_coverage=1 00:30:58.548 --rc genhtml_legend=1 00:30:58.548 --rc geninfo_all_blocks=1 00:30:58.548 --rc geninfo_unexecuted_blocks=1 00:30:58.548 00:30:58.548 ' 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:58.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.548 --rc genhtml_branch_coverage=1 00:30:58.548 --rc genhtml_function_coverage=1 00:30:58.548 --rc genhtml_legend=1 00:30:58.548 --rc geninfo_all_blocks=1 00:30:58.548 --rc geninfo_unexecuted_blocks=1 00:30:58.548 00:30:58.548 ' 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:58.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.548 --rc genhtml_branch_coverage=1 00:30:58.548 --rc genhtml_function_coverage=1 00:30:58.548 --rc genhtml_legend=1 00:30:58.548 --rc geninfo_all_blocks=1 00:30:58.548 --rc geninfo_unexecuted_blocks=1 00:30:58.548 00:30:58.548 ' 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:58.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.548 --rc genhtml_branch_coverage=1 00:30:58.548 --rc genhtml_function_coverage=1 00:30:58.548 --rc genhtml_legend=1 00:30:58.548 --rc geninfo_all_blocks=1 00:30:58.548 --rc geninfo_unexecuted_blocks=1 00:30:58.548 00:30:58.548 ' 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:58.548 
10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:58.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:30:58.548 00:30:58.548 real 0m0.202s 00:30:58.548 user 0m0.129s 00:30:58.548 sys 0m0.087s 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:58.548 ************************************ 00:30:58.548 END TEST dma 00:30:58.548 ************************************ 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.548 ************************************ 00:30:58.548 START TEST nvmf_identify 00:30:58.548 
************************************ 00:30:58.548 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:58.808 * Looking for test storage... 00:30:58.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:58.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.808 --rc genhtml_branch_coverage=1 00:30:58.808 --rc genhtml_function_coverage=1 00:30:58.808 --rc genhtml_legend=1 00:30:58.808 --rc geninfo_all_blocks=1 00:30:58.808 --rc geninfo_unexecuted_blocks=1 00:30:58.808 00:30:58.808 ' 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:58.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.808 --rc genhtml_branch_coverage=1 00:30:58.808 --rc genhtml_function_coverage=1 00:30:58.808 --rc genhtml_legend=1 00:30:58.808 --rc geninfo_all_blocks=1 00:30:58.808 --rc geninfo_unexecuted_blocks=1 00:30:58.808 00:30:58.808 ' 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:58.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.808 --rc genhtml_branch_coverage=1 00:30:58.808 --rc genhtml_function_coverage=1 00:30:58.808 --rc genhtml_legend=1 00:30:58.808 --rc geninfo_all_blocks=1 00:30:58.808 --rc geninfo_unexecuted_blocks=1 00:30:58.808 00:30:58.808 ' 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:58.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.808 --rc genhtml_branch_coverage=1 00:30:58.808 --rc genhtml_function_coverage=1 00:30:58.808 --rc genhtml_legend=1 00:30:58.808 --rc geninfo_all_blocks=1 00:30:58.808 --rc geninfo_unexecuted_blocks=1 00:30:58.808 00:30:58.808 ' 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.808 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:58.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:30:58.809 10:32:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:04.077 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:04.077 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
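The loop running here walks each supported E810 port that was detected (vendor 0x8086, device 0x159b, bound to the ice driver) and resolves it to the kernel net device it exposes, which produces the "Found net devices under 0000:af:00.x" lines that follow. A minimal standalone sketch of that sysfs lookup, assuming the first PCI address reported in this trace (any other address, and the resulting interface name, is machine-specific):

    # Resolve one PCI function to its net device(s), mirroring the
    # pci_net_devs glob used by nvmf/common.sh in the trace above.
    pci=0000:af:00.0                                    # first E810 port from the log
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"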
00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:04.077 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:04.078 Found net devices under 0000:af:00.0: cvl_0_0 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:04.078 Found net devices under 0000:af:00.1: cvl_0_1 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:04.078 10:32:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:04.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:04.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:31:04.337 00:31:04.337 --- 10.0.0.2 ping statistics --- 00:31:04.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.337 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:04.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:04.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:31:04.337 00:31:04.337 --- 10.0.0.1 ping statistics --- 00:31:04.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.337 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=4061653 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 4061653 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 4061653 ']' 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:04.337 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:04.337 [2024-12-13 10:32:58.175577] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
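At this point nvmftestinit has finished assembling the small two-port test network: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace as the target-side interface (10.0.0.2/24), cvl_0_1 stayed in the root namespace as the initiator side (10.0.0.1/24), an iptables rule admits TCP traffic to the NVMe/TCP port 4420, and connectivity was verified with one ping in each direction before nvmf_tgt was launched inside the namespace. Collected into a single runnable sketch (commands, interface names and addresses taken from the trace; the iptables comment tag is dropped and the nvmf_tgt path is the workspace-specific one from this job, so both are illustrative only):

    # Target-side port goes into its own namespace; the initiator stays in the root ns.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP connections arriving on the initiator-facing interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check connectivity in both directions.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Run the NVMe-oF target inside the target namespace (same flags as the test).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF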
00:31:04.337 [2024-12-13 10:32:58.175680] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.596 [2024-12-13 10:32:58.293705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:04.596 [2024-12-13 10:32:58.409691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:04.596 [2024-12-13 10:32:58.409734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:04.596 [2024-12-13 10:32:58.409745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:04.596 [2024-12-13 10:32:58.409757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:04.596 [2024-12-13 10:32:58.409766] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:04.596 [2024-12-13 10:32:58.412330] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:04.596 [2024-12-13 10:32:58.412407] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:04.596 [2024-12-13 10:32:58.412557] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.596 [2024-12-13 10:32:58.412566] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:05.164 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:05.164 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:31:05.164 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:05.164 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.164 10:32:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:05.164 [2024-12-13 10:32:59.000698] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:05.164 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.164 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:05.164 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:05.164 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:05.164 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:05.164 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.164 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:05.423 Malloc0 00:31:05.423 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.423 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:05.423 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.423 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:05.423 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.423 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:31:05.423 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.423 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:05.423 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.423 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:05.423 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.423 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:05.423 [2024-12-13 10:32:59.153134] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.423 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.423 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:05.423 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.423 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:05.423 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.423 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:05.423 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.423 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:05.424 [ 00:31:05.424 { 00:31:05.424 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:05.424 "subtype": "Discovery", 00:31:05.424 "listen_addresses": [ 00:31:05.424 { 00:31:05.424 "trtype": "TCP", 00:31:05.424 "adrfam": "IPv4", 00:31:05.424 "traddr": "10.0.0.2", 00:31:05.424 "trsvcid": "4420" 00:31:05.424 } 00:31:05.424 ], 00:31:05.424 "allow_any_host": true, 00:31:05.424 "hosts": [] 00:31:05.424 }, 00:31:05.424 { 00:31:05.424 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:05.424 "subtype": "NVMe", 00:31:05.424 "listen_addresses": [ 00:31:05.424 { 00:31:05.424 "trtype": "TCP", 00:31:05.424 "adrfam": "IPv4", 00:31:05.424 "traddr": "10.0.0.2", 00:31:05.424 "trsvcid": "4420" 00:31:05.424 } 00:31:05.424 ], 00:31:05.424 "allow_any_host": true, 00:31:05.424 "hosts": [], 00:31:05.424 "serial_number": "SPDK00000000000001", 00:31:05.424 "model_number": "SPDK bdev Controller", 00:31:05.424 "max_namespaces": 32, 00:31:05.424 "min_cntlid": 1, 00:31:05.424 "max_cntlid": 65519, 00:31:05.424 "namespaces": [ 00:31:05.424 { 00:31:05.424 "nsid": 1, 00:31:05.424 "bdev_name": "Malloc0", 00:31:05.424 "name": "Malloc0", 00:31:05.424 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:05.424 "eui64": "ABCDEF0123456789", 00:31:05.424 "uuid": "153be8fe-7271-442d-a50b-bf7a69fae75a" 00:31:05.424 } 00:31:05.424 ] 00:31:05.424 } 00:31:05.424 ] 00:31:05.424 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.424 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:05.424 [2024-12-13 10:32:59.224291] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:31:05.424 [2024-12-13 10:32:59.224354] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4061879 ] 00:31:05.424 [2024-12-13 10:32:59.286112] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:31:05.424 [2024-12-13 10:32:59.286221] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:05.424 [2024-12-13 10:32:59.286230] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:05.424 [2024-12-13 10:32:59.286252] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:05.424 [2024-12-13 10:32:59.286265] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:05.424 [2024-12-13 10:32:59.286862] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:31:05.424 [2024-12-13 10:32:59.286904] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500001db80 0 00:31:05.424 [2024-12-13 10:32:59.293597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:05.424 [2024-12-13 10:32:59.293622] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:05.424 [2024-12-13 10:32:59.293634] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:05.424 [2024-12-13 10:32:59.293640] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:05.424 [2024-12-13 10:32:59.293693] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.424 [2024-12-13 10:32:59.293703] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.424 [2024-12-13 10:32:59.293714] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:05.424 [2024-12-13 10:32:59.293737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:05.424 [2024-12-13 10:32:59.293758] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:05.424 [2024-12-13 10:32:59.301467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.424 [2024-12-13 10:32:59.301488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.424 [2024-12-13 10:32:59.301495] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.424 [2024-12-13 10:32:59.301506] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:05.424 [2024-12-13 10:32:59.301523] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:05.424 [2024-12-13 10:32:59.301539] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:31:05.424 [2024-12-13 10:32:59.301550] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:31:05.424 [2024-12-13 
10:32:59.301569] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.424 [2024-12-13 10:32:59.301576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.424 [2024-12-13 10:32:59.301582] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:05.424 [2024-12-13 10:32:59.301595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.424 [2024-12-13 10:32:59.301616] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:05.424 [2024-12-13 10:32:59.301823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.424 [2024-12-13 10:32:59.301832] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.424 [2024-12-13 10:32:59.301837] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.424 [2024-12-13 10:32:59.301843] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:05.424 [2024-12-13 10:32:59.301858] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:31:05.424 [2024-12-13 10:32:59.301870] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:31:05.424 [2024-12-13 10:32:59.301879] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.424 [2024-12-13 10:32:59.301885] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.424 [2024-12-13 10:32:59.301891] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:05.424 [2024-12-13 10:32:59.301904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.424 [2024-12-13 10:32:59.301919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:05.424 [2024-12-13 10:32:59.302002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.424 [2024-12-13 10:32:59.302011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.424 [2024-12-13 10:32:59.302016] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.424 [2024-12-13 10:32:59.302021] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:05.424 [2024-12-13 10:32:59.302029] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:31:05.424 [2024-12-13 10:32:59.302042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:05.424 [2024-12-13 10:32:59.302055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.424 [2024-12-13 10:32:59.302061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.424 [2024-12-13 10:32:59.302066] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:05.424 [2024-12-13 10:32:59.302080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.424 [2024-12-13 10:32:59.302095] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:05.424 [2024-12-13 10:32:59.302175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.424 [2024-12-13 10:32:59.302185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.424 [2024-12-13 10:32:59.302189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.424 [2024-12-13 10:32:59.302194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:05.424 [2024-12-13 10:32:59.302202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:05.424 [2024-12-13 10:32:59.302215] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.424 [2024-12-13 10:32:59.302221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.424 [2024-12-13 10:32:59.302226] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:05.424 [2024-12-13 10:32:59.302236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.424 [2024-12-13 10:32:59.302252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:05.424 [2024-12-13 10:32:59.302326] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.424 [2024-12-13 10:32:59.302335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.424 [2024-12-13 10:32:59.302339] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.424 [2024-12-13 10:32:59.302344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:05.424 [2024-12-13 10:32:59.302351] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:05.424 [2024-12-13 10:32:59.302359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:05.424 [2024-12-13 10:32:59.302369] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:05.424 [2024-12-13 10:32:59.302479] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:31:05.424 [2024-12-13 10:32:59.302487] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:05.424 [2024-12-13 10:32:59.302505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.424 [2024-12-13 10:32:59.302511] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.424 [2024-12-13 10:32:59.302517] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:05.424 [2024-12-13 10:32:59.302527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.424 [2024-12-13 10:32:59.302546] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:05.424 [2024-12-13 10:32:59.302628] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.424 [2024-12-13 10:32:59.302639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.424 [2024-12-13 10:32:59.302644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.424 [2024-12-13 10:32:59.302649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:05.425 [2024-12-13 10:32:59.302656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:05.425 [2024-12-13 10:32:59.302668] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.425 [2024-12-13 10:32:59.302674] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.425 [2024-12-13 10:32:59.302680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:05.425 [2024-12-13 10:32:59.302692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.425 [2024-12-13 10:32:59.302706] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:05.425 [2024-12-13 10:32:59.302780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.425 [2024-12-13 10:32:59.302789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.425 [2024-12-13 10:32:59.302794] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.425 [2024-12-13 10:32:59.302801] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:05.425 [2024-12-13 10:32:59.302808] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:05.425 [2024-12-13 10:32:59.302815] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:05.425 [2024-12-13 10:32:59.302825] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:31:05.425 [2024-12-13 10:32:59.302841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:05.425 [2024-12-13 10:32:59.302857] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.425 [2024-12-13 10:32:59.302863] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:05.425 [2024-12-13 10:32:59.302874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.425 [2024-12-13 10:32:59.302888] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:05.425 [2024-12-13 10:32:59.302998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:05.425 [2024-12-13 10:32:59.303007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:05.425 [2024-12-13 10:32:59.303011] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:05.425 [2024-12-13 10:32:59.303018] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info 
on tqpair(0x61500001db80): datao=0, datal=4096, cccid=0 00:31:05.425 [2024-12-13 10:32:59.303030] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:05.425 [2024-12-13 10:32:59.303038] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.425 [2024-12-13 10:32:59.303059] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:05.425 [2024-12-13 10:32:59.303066] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:05.685 [2024-12-13 10:32:59.344615] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.685 [2024-12-13 10:32:59.344638] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.685 [2024-12-13 10:32:59.344644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.685 [2024-12-13 10:32:59.344651] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:05.685 [2024-12-13 10:32:59.344672] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:31:05.685 [2024-12-13 10:32:59.344681] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:31:05.685 [2024-12-13 10:32:59.344688] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:31:05.685 [2024-12-13 10:32:59.344700] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:31:05.685 [2024-12-13 10:32:59.344707] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:31:05.685 [2024-12-13 10:32:59.344714] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:31:05.685 [2024-12-13 10:32:59.344729] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:05.685 [2024-12-13 10:32:59.344741] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.685 [2024-12-13 10:32:59.344748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.685 [2024-12-13 10:32:59.344756] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:05.685 [2024-12-13 10:32:59.344771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:05.685 [2024-12-13 10:32:59.344789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:05.685 [2024-12-13 10:32:59.344874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.685 [2024-12-13 10:32:59.344882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.685 [2024-12-13 10:32:59.344887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.685 [2024-12-13 10:32:59.344893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:05.685 [2024-12-13 10:32:59.344903] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.685 [2024-12-13 10:32:59.344914] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.685 [2024-12-13 10:32:59.344919] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:05.685 [2024-12-13 10:32:59.344929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.685 [2024-12-13 10:32:59.344937] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.685 [2024-12-13 10:32:59.344943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.685 [2024-12-13 10:32:59.344948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500001db80) 00:31:05.685 [2024-12-13 10:32:59.344956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.685 [2024-12-13 10:32:59.344963] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.685 [2024-12-13 10:32:59.344968] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.685 [2024-12-13 10:32:59.344972] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500001db80) 00:31:05.685 [2024-12-13 10:32:59.344980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.685 [2024-12-13 10:32:59.344987] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.685 [2024-12-13 10:32:59.344992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.685 [2024-12-13 10:32:59.344996] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.685 [2024-12-13 10:32:59.345004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.685 [2024-12-13 10:32:59.345011] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:05.685 [2024-12-13 10:32:59.345029] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:05.686 [2024-12-13 10:32:59.345038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.345046] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:05.686 [2024-12-13 10:32:59.345057] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.686 [2024-12-13 10:32:59.345074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:05.686 [2024-12-13 10:32:59.345081] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:31:05.686 [2024-12-13 10:32:59.345087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:31:05.686 [2024-12-13 10:32:59.345093] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.686 [2024-12-13 10:32:59.345099] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:05.686 [2024-12-13 10:32:59.345212] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.686 [2024-12-13 10:32:59.345221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.686 [2024-12-13 10:32:59.345226] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.345231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:05.686 [2024-12-13 10:32:59.345239] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:31:05.686 [2024-12-13 10:32:59.345246] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:31:05.686 [2024-12-13 10:32:59.345263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.345270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:05.686 [2024-12-13 10:32:59.345281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.686 [2024-12-13 10:32:59.345295] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:05.686 [2024-12-13 10:32:59.345384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:05.686 [2024-12-13 10:32:59.345394] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:05.686 [2024-12-13 10:32:59.345403] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.345409] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:05.686 [2024-12-13 10:32:59.345416] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:05.686 [2024-12-13 10:32:59.345422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.349463] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.349478] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.349490] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.686 [2024-12-13 10:32:59.349498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.686 [2024-12-13 10:32:59.349503] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.349509] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:05.686 [2024-12-13 10:32:59.349532] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:31:05.686 [2024-12-13 10:32:59.349579] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.349587] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:05.686 [2024-12-13 10:32:59.349598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.686 [2024-12-13 10:32:59.349607] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:31:05.686 [2024-12-13 10:32:59.349613] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.349619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:05.686 [2024-12-13 10:32:59.349627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.686 [2024-12-13 10:32:59.349648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:05.686 [2024-12-13 10:32:59.349656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:05.686 [2024-12-13 10:32:59.349912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:05.686 [2024-12-13 10:32:59.349925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:05.686 [2024-12-13 10:32:59.349930] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.349938] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=1024, cccid=4 00:31:05.686 [2024-12-13 10:32:59.349945] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=1024 00:31:05.686 [2024-12-13 10:32:59.349952] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.349961] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.349967] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.349974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.686 [2024-12-13 10:32:59.349981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.686 [2024-12-13 10:32:59.349986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.349992] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:05.686 [2024-12-13 10:32:59.391621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.686 [2024-12-13 10:32:59.391641] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.686 [2024-12-13 10:32:59.391646] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.391659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:05.686 [2024-12-13 10:32:59.391684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.391691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:05.686 [2024-12-13 10:32:59.391706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.686 [2024-12-13 10:32:59.391732] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:05.686 [2024-12-13 10:32:59.391850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:05.686 [2024-12-13 10:32:59.391859] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:05.686 [2024-12-13 10:32:59.391863] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:05.686 
[2024-12-13 10:32:59.391869] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=3072, cccid=4 00:31:05.686 [2024-12-13 10:32:59.391875] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=3072 00:31:05.686 [2024-12-13 10:32:59.391881] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.391893] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.391898] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.435470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.686 [2024-12-13 10:32:59.435490] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.686 [2024-12-13 10:32:59.435495] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.435501] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:05.686 [2024-12-13 10:32:59.435520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.435527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:05.686 [2024-12-13 10:32:59.435539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.686 [2024-12-13 10:32:59.435562] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:05.686 [2024-12-13 10:32:59.435694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:05.686 [2024-12-13 10:32:59.435721] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:05.686 [2024-12-13 10:32:59.435726] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.435731] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=8, cccid=4 00:31:05.686 [2024-12-13 10:32:59.435737] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=8 00:31:05.686 [2024-12-13 10:32:59.435744] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.435753] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.435758] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.477539] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.686 [2024-12-13 10:32:59.477560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.686 [2024-12-13 10:32:59.477565] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.686 [2024-12-13 10:32:59.477571] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:05.686 ===================================================== 00:31:05.686 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:05.686 ===================================================== 00:31:05.686 Controller Capabilities/Features 00:31:05.686 ================================ 00:31:05.686 Vendor ID: 0000 00:31:05.686 Subsystem Vendor ID: 0000 
00:31:05.686 Serial Number: .................... 00:31:05.686 Model Number: ........................................ 00:31:05.686 Firmware Version: 25.01 00:31:05.686 Recommended Arb Burst: 0 00:31:05.686 IEEE OUI Identifier: 00 00 00 00:31:05.686 Multi-path I/O 00:31:05.686 May have multiple subsystem ports: No 00:31:05.686 May have multiple controllers: No 00:31:05.686 Associated with SR-IOV VF: No 00:31:05.686 Max Data Transfer Size: 131072 00:31:05.686 Max Number of Namespaces: 0 00:31:05.686 Max Number of I/O Queues: 1024 00:31:05.686 NVMe Specification Version (VS): 1.3 00:31:05.686 NVMe Specification Version (Identify): 1.3 00:31:05.686 Maximum Queue Entries: 128 00:31:05.686 Contiguous Queues Required: Yes 00:31:05.686 Arbitration Mechanisms Supported 00:31:05.686 Weighted Round Robin: Not Supported 00:31:05.686 Vendor Specific: Not Supported 00:31:05.686 Reset Timeout: 15000 ms 00:31:05.686 Doorbell Stride: 4 bytes 00:31:05.686 NVM Subsystem Reset: Not Supported 00:31:05.686 Command Sets Supported 00:31:05.687 NVM Command Set: Supported 00:31:05.687 Boot Partition: Not Supported 00:31:05.687 Memory Page Size Minimum: 4096 bytes 00:31:05.687 Memory Page Size Maximum: 4096 bytes 00:31:05.687 Persistent Memory Region: Not Supported 00:31:05.687 Optional Asynchronous Events Supported 00:31:05.687 Namespace Attribute Notices: Not Supported 00:31:05.687 Firmware Activation Notices: Not Supported 00:31:05.687 ANA Change Notices: Not Supported 00:31:05.687 PLE Aggregate Log Change Notices: Not Supported 00:31:05.687 LBA Status Info Alert Notices: Not Supported 00:31:05.687 EGE Aggregate Log Change Notices: Not Supported 00:31:05.687 Normal NVM Subsystem Shutdown event: Not Supported 00:31:05.687 Zone Descriptor Change Notices: Not Supported 00:31:05.687 Discovery Log Change Notices: Supported 00:31:05.687 Controller Attributes 00:31:05.687 128-bit Host Identifier: Not Supported 00:31:05.687 Non-Operational Permissive Mode: Not Supported 00:31:05.687 NVM Sets: Not Supported 00:31:05.687 Read Recovery Levels: Not Supported 00:31:05.687 Endurance Groups: Not Supported 00:31:05.687 Predictable Latency Mode: Not Supported 00:31:05.687 Traffic Based Keep ALive: Not Supported 00:31:05.687 Namespace Granularity: Not Supported 00:31:05.687 SQ Associations: Not Supported 00:31:05.687 UUID List: Not Supported 00:31:05.687 Multi-Domain Subsystem: Not Supported 00:31:05.687 Fixed Capacity Management: Not Supported 00:31:05.687 Variable Capacity Management: Not Supported 00:31:05.687 Delete Endurance Group: Not Supported 00:31:05.687 Delete NVM Set: Not Supported 00:31:05.687 Extended LBA Formats Supported: Not Supported 00:31:05.687 Flexible Data Placement Supported: Not Supported 00:31:05.687 00:31:05.687 Controller Memory Buffer Support 00:31:05.687 ================================ 00:31:05.687 Supported: No 00:31:05.687 00:31:05.687 Persistent Memory Region Support 00:31:05.687 ================================ 00:31:05.687 Supported: No 00:31:05.687 00:31:05.687 Admin Command Set Attributes 00:31:05.687 ============================ 00:31:05.687 Security Send/Receive: Not Supported 00:31:05.687 Format NVM: Not Supported 00:31:05.687 Firmware Activate/Download: Not Supported 00:31:05.687 Namespace Management: Not Supported 00:31:05.687 Device Self-Test: Not Supported 00:31:05.687 Directives: Not Supported 00:31:05.687 NVMe-MI: Not Supported 00:31:05.687 Virtualization Management: Not Supported 00:31:05.687 Doorbell Buffer Config: Not Supported 00:31:05.687 Get LBA Status Capability: Not Supported 
00:31:05.687 Command & Feature Lockdown Capability: Not Supported 00:31:05.687 Abort Command Limit: 1 00:31:05.687 Async Event Request Limit: 4 00:31:05.687 Number of Firmware Slots: N/A 00:31:05.687 Firmware Slot 1 Read-Only: N/A 00:31:05.687 Firmware Activation Without Reset: N/A 00:31:05.687 Multiple Update Detection Support: N/A 00:31:05.687 Firmware Update Granularity: No Information Provided 00:31:05.687 Per-Namespace SMART Log: No 00:31:05.687 Asymmetric Namespace Access Log Page: Not Supported 00:31:05.687 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:05.687 Command Effects Log Page: Not Supported 00:31:05.687 Get Log Page Extended Data: Supported 00:31:05.687 Telemetry Log Pages: Not Supported 00:31:05.687 Persistent Event Log Pages: Not Supported 00:31:05.687 Supported Log Pages Log Page: May Support 00:31:05.687 Commands Supported & Effects Log Page: Not Supported 00:31:05.687 Feature Identifiers & Effects Log Page:May Support 00:31:05.687 NVMe-MI Commands & Effects Log Page: May Support 00:31:05.687 Data Area 4 for Telemetry Log: Not Supported 00:31:05.687 Error Log Page Entries Supported: 128 00:31:05.687 Keep Alive: Not Supported 00:31:05.687 00:31:05.687 NVM Command Set Attributes 00:31:05.687 ========================== 00:31:05.687 Submission Queue Entry Size 00:31:05.687 Max: 1 00:31:05.687 Min: 1 00:31:05.687 Completion Queue Entry Size 00:31:05.687 Max: 1 00:31:05.687 Min: 1 00:31:05.687 Number of Namespaces: 0 00:31:05.687 Compare Command: Not Supported 00:31:05.687 Write Uncorrectable Command: Not Supported 00:31:05.687 Dataset Management Command: Not Supported 00:31:05.687 Write Zeroes Command: Not Supported 00:31:05.687 Set Features Save Field: Not Supported 00:31:05.687 Reservations: Not Supported 00:31:05.687 Timestamp: Not Supported 00:31:05.687 Copy: Not Supported 00:31:05.687 Volatile Write Cache: Not Present 00:31:05.687 Atomic Write Unit (Normal): 1 00:31:05.687 Atomic Write Unit (PFail): 1 00:31:05.687 Atomic Compare & Write Unit: 1 00:31:05.687 Fused Compare & Write: Supported 00:31:05.687 Scatter-Gather List 00:31:05.687 SGL Command Set: Supported 00:31:05.687 SGL Keyed: Supported 00:31:05.687 SGL Bit Bucket Descriptor: Not Supported 00:31:05.687 SGL Metadata Pointer: Not Supported 00:31:05.687 Oversized SGL: Not Supported 00:31:05.687 SGL Metadata Address: Not Supported 00:31:05.687 SGL Offset: Supported 00:31:05.687 Transport SGL Data Block: Not Supported 00:31:05.687 Replay Protected Memory Block: Not Supported 00:31:05.687 00:31:05.687 Firmware Slot Information 00:31:05.687 ========================= 00:31:05.687 Active slot: 0 00:31:05.687 00:31:05.687 00:31:05.687 Error Log 00:31:05.687 ========= 00:31:05.687 00:31:05.687 Active Namespaces 00:31:05.687 ================= 00:31:05.687 Discovery Log Page 00:31:05.687 ================== 00:31:05.687 Generation Counter: 2 00:31:05.687 Number of Records: 2 00:31:05.687 Record Format: 0 00:31:05.687 00:31:05.687 Discovery Log Entry 0 00:31:05.687 ---------------------- 00:31:05.687 Transport Type: 3 (TCP) 00:31:05.687 Address Family: 1 (IPv4) 00:31:05.687 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:05.687 Entry Flags: 00:31:05.687 Duplicate Returned Information: 1 00:31:05.687 Explicit Persistent Connection Support for Discovery: 1 00:31:05.687 Transport Requirements: 00:31:05.687 Secure Channel: Not Required 00:31:05.687 Port ID: 0 (0x0000) 00:31:05.687 Controller ID: 65535 (0xffff) 00:31:05.687 Admin Max SQ Size: 128 00:31:05.687 Transport Service Identifier: 4420 00:31:05.687 NVM 
Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:05.687 Transport Address: 10.0.0.2 00:31:05.687 Discovery Log Entry 1 00:31:05.687 ---------------------- 00:31:05.687 Transport Type: 3 (TCP) 00:31:05.687 Address Family: 1 (IPv4) 00:31:05.687 Subsystem Type: 2 (NVM Subsystem) 00:31:05.687 Entry Flags: 00:31:05.687 Duplicate Returned Information: 0 00:31:05.687 Explicit Persistent Connection Support for Discovery: 0 00:31:05.687 Transport Requirements: 00:31:05.687 Secure Channel: Not Required 00:31:05.687 Port ID: 0 (0x0000) 00:31:05.687 Controller ID: 65535 (0xffff) 00:31:05.687 Admin Max SQ Size: 128 00:31:05.687 Transport Service Identifier: 4420 00:31:05.687 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:31:05.687 Transport Address: 10.0.0.2 [2024-12-13 10:32:59.477698] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:31:05.687 [2024-12-13 10:32:59.477713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:05.687 [2024-12-13 10:32:59.477725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.687 [2024-12-13 10:32:59.477733] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500001db80 00:31:05.687 [2024-12-13 10:32:59.477740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.687 [2024-12-13 10:32:59.477746] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500001db80 00:31:05.687 [2024-12-13 10:32:59.477753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.687 [2024-12-13 10:32:59.477760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.687 [2024-12-13 10:32:59.477767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.687 [2024-12-13 10:32:59.477780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.687 [2024-12-13 10:32:59.477787] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.687 [2024-12-13 10:32:59.477793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.687 [2024-12-13 10:32:59.477807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.687 [2024-12-13 10:32:59.477828] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.687 [2024-12-13 10:32:59.477918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.687 [2024-12-13 10:32:59.477928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.687 [2024-12-13 10:32:59.477934] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.687 [2024-12-13 10:32:59.477939] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.687 [2024-12-13 10:32:59.477950] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.687 [2024-12-13 10:32:59.477956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:31:05.688 [2024-12-13 10:32:59.477962] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.688 [2024-12-13 10:32:59.477975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.688 [2024-12-13 10:32:59.477995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.688 [2024-12-13 10:32:59.478115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.688 [2024-12-13 10:32:59.478123] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.688 [2024-12-13 10:32:59.478128] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.688 [2024-12-13 10:32:59.478133] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.688 [2024-12-13 10:32:59.478143] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:31:05.688 [2024-12-13 10:32:59.478150] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:31:05.688 [2024-12-13 10:32:59.478163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.688 [2024-12-13 10:32:59.478169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.688 [2024-12-13 10:32:59.478175] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.688 [2024-12-13 10:32:59.478185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.688 [2024-12-13 10:32:59.478199] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.688 [2024-12-13 10:32:59.478268] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.688 [2024-12-13 10:32:59.478277] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.688 [2024-12-13 10:32:59.478281] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.688 [2024-12-13 10:32:59.478286] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.688 [2024-12-13 10:32:59.478299] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.688 [2024-12-13 10:32:59.478304] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.688 [2024-12-13 10:32:59.478309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.688 [2024-12-13 10:32:59.478318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.688 [2024-12-13 10:32:59.478331] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.688 [2024-12-13 10:32:59.478402] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.688 [2024-12-13 10:32:59.478410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.688 [2024-12-13 10:32:59.478415] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.688 [2024-12-13 10:32:59.478420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.688 [2024-12-13 
10:32:59.478432] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.688 [2024-12-13 10:32:59.478440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.688 [2024-12-13 10:32:59.478444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.688 [2024-12-13 10:32:59.482471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.688 [2024-12-13 10:32:59.482495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.688 [2024-12-13 10:32:59.482590] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.688 [2024-12-13 10:32:59.482599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.688 [2024-12-13 10:32:59.482603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.688 [2024-12-13 10:32:59.482609] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.688 [2024-12-13 10:32:59.482620] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:31:05.688 00:31:05.688 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:31:05.951 [2024-12-13 10:32:59.578392] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:31:05.951 [2024-12-13 10:32:59.578470] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4061912 ] 00:31:05.951 [2024-12-13 10:32:59.638993] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:31:05.951 [2024-12-13 10:32:59.639095] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:05.951 [2024-12-13 10:32:59.639106] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:05.951 [2024-12-13 10:32:59.639127] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:05.951 [2024-12-13 10:32:59.639141] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:05.951 [2024-12-13 10:32:59.642929] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:31:05.951 [2024-12-13 10:32:59.642972] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500001db80 0 00:31:05.951 [2024-12-13 10:32:59.649463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:05.951 [2024-12-13 10:32:59.649488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:05.951 [2024-12-13 10:32:59.649497] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:05.951 [2024-12-13 10:32:59.649503] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:05.951 [2024-12-13 10:32:59.649553] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.951 [2024-12-13 
10:32:59.649562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.951 [2024-12-13 10:32:59.649570] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:05.951 [2024-12-13 10:32:59.649590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:05.951 [2024-12-13 10:32:59.649618] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:05.951 [2024-12-13 10:32:59.656465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.951 [2024-12-13 10:32:59.656485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.951 [2024-12-13 10:32:59.656491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.951 [2024-12-13 10:32:59.656502] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:05.951 [2024-12-13 10:32:59.656523] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:05.951 [2024-12-13 10:32:59.656546] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:31:05.951 [2024-12-13 10:32:59.656555] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:31:05.951 [2024-12-13 10:32:59.656574] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.951 [2024-12-13 10:32:59.656581] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.951 [2024-12-13 10:32:59.656589] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:05.951 [2024-12-13 10:32:59.656602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.951 [2024-12-13 10:32:59.656622] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:05.951 [2024-12-13 10:32:59.656754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.951 [2024-12-13 10:32:59.656764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.951 [2024-12-13 10:32:59.656769] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.951 [2024-12-13 10:32:59.656776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:05.951 [2024-12-13 10:32:59.656787] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:31:05.951 [2024-12-13 10:32:59.656801] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:31:05.951 [2024-12-13 10:32:59.656814] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.951 [2024-12-13 10:32:59.656820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.951 [2024-12-13 10:32:59.656826] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:05.951 [2024-12-13 10:32:59.656838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.952 [2024-12-13 10:32:59.656853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0x62600001b100, cid 0, qid 0 00:31:05.952 [2024-12-13 10:32:59.656950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.952 [2024-12-13 10:32:59.656959] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.952 [2024-12-13 10:32:59.656964] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.656969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:05.952 [2024-12-13 10:32:59.656977] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:31:05.952 [2024-12-13 10:32:59.656988] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:05.952 [2024-12-13 10:32:59.656997] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.657005] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.657011] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:05.952 [2024-12-13 10:32:59.657023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.952 [2024-12-13 10:32:59.657037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:05.952 [2024-12-13 10:32:59.657157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.952 [2024-12-13 10:32:59.657166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.952 [2024-12-13 10:32:59.657173] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.657178] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:05.952 [2024-12-13 10:32:59.657186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:05.952 [2024-12-13 10:32:59.657199] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.657205] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.657213] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:05.952 [2024-12-13 10:32:59.657224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.952 [2024-12-13 10:32:59.657237] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:05.952 [2024-12-13 10:32:59.657305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.952 [2024-12-13 10:32:59.657314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.952 [2024-12-13 10:32:59.657321] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.657326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:05.952 [2024-12-13 10:32:59.657334] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:05.952 [2024-12-13 10:32:59.657341] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:05.952 [2024-12-13 10:32:59.657351] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:05.952 [2024-12-13 10:32:59.657462] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:31:05.952 [2024-12-13 10:32:59.657469] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:05.952 [2024-12-13 10:32:59.657487] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.657493] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.657498] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:05.952 [2024-12-13 10:32:59.657509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.952 [2024-12-13 10:32:59.657524] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:05.952 [2024-12-13 10:32:59.657646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.952 [2024-12-13 10:32:59.657655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.952 [2024-12-13 10:32:59.657660] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.657665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:05.952 [2024-12-13 10:32:59.657672] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:05.952 [2024-12-13 10:32:59.657685] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.657691] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.657699] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:05.952 [2024-12-13 10:32:59.657710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.952 [2024-12-13 10:32:59.657724] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:05.952 [2024-12-13 10:32:59.657804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.952 [2024-12-13 10:32:59.657814] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.952 [2024-12-13 10:32:59.657819] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.657824] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:05.952 [2024-12-13 10:32:59.657831] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:05.952 [2024-12-13 10:32:59.657838] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:05.952 [2024-12-13 
10:32:59.657852] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:31:05.952 [2024-12-13 10:32:59.657861] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:05.952 [2024-12-13 10:32:59.657880] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.657886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:05.952 [2024-12-13 10:32:59.657897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.952 [2024-12-13 10:32:59.657911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:05.952 [2024-12-13 10:32:59.658034] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:05.952 [2024-12-13 10:32:59.658043] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:05.952 [2024-12-13 10:32:59.658048] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.658056] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=0 00:31:05.952 [2024-12-13 10:32:59.658063] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:05.952 [2024-12-13 10:32:59.658071] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.658093] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.658100] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.700463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.952 [2024-12-13 10:32:59.700484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.952 [2024-12-13 10:32:59.700490] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.700496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:05.952 [2024-12-13 10:32:59.700513] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:31:05.952 [2024-12-13 10:32:59.700521] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:31:05.952 [2024-12-13 10:32:59.700528] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:31:05.952 [2024-12-13 10:32:59.700535] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:31:05.952 [2024-12-13 10:32:59.700542] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:31:05.952 [2024-12-13 10:32:59.700557] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:31:05.952 [2024-12-13 10:32:59.700575] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:05.952 [2024-12-13 10:32:59.700586] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.700592] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.700601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:05.952 [2024-12-13 10:32:59.700615] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:05.952 [2024-12-13 10:32:59.700634] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:05.952 [2024-12-13 10:32:59.700770] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.952 [2024-12-13 10:32:59.700778] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.952 [2024-12-13 10:32:59.700782] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.700788] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:05.952 [2024-12-13 10:32:59.700798] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.700804] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.700810] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:05.952 [2024-12-13 10:32:59.700822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.952 [2024-12-13 10:32:59.700830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.700835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.952 [2024-12-13 10:32:59.700840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500001db80) 00:31:05.952 [2024-12-13 10:32:59.700848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.952 [2024-12-13 10:32:59.700855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.700860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.700865] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500001db80) 00:31:05.953 [2024-12-13 10:32:59.700873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.953 [2024-12-13 10:32:59.700880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.700884] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.700889] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.953 [2024-12-13 10:32:59.700897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.953 [2024-12-13 10:32:59.700903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:05.953 [2024-12-13 10:32:59.700918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait 
for set keep alive timeout (timeout 30000 ms) 00:31:05.953 [2024-12-13 10:32:59.700927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.700933] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:05.953 [2024-12-13 10:32:59.700942] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.953 [2024-12-13 10:32:59.700958] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:05.953 [2024-12-13 10:32:59.700965] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:31:05.953 [2024-12-13 10:32:59.700971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:31:05.953 [2024-12-13 10:32:59.700977] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.953 [2024-12-13 10:32:59.700983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:05.953 [2024-12-13 10:32:59.701099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.953 [2024-12-13 10:32:59.701108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.953 [2024-12-13 10:32:59.701113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.701118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:05.953 [2024-12-13 10:32:59.701126] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:31:05.953 [2024-12-13 10:32:59.701134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:05.953 [2024-12-13 10:32:59.701145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:31:05.953 [2024-12-13 10:32:59.701153] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:31:05.953 [2024-12-13 10:32:59.701165] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.701171] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.701176] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:05.953 [2024-12-13 10:32:59.701186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:05.953 [2024-12-13 10:32:59.701201] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:05.953 [2024-12-13 10:32:59.701322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.953 [2024-12-13 10:32:59.701330] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.953 [2024-12-13 10:32:59.701334] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.701340] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:05.953 
[2024-12-13 10:32:59.701408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:31:05.953 [2024-12-13 10:32:59.701428] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:05.953 [2024-12-13 10:32:59.701441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.701463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:05.953 [2024-12-13 10:32:59.701474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.953 [2024-12-13 10:32:59.701491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:05.953 [2024-12-13 10:32:59.701587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:05.953 [2024-12-13 10:32:59.701596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:05.953 [2024-12-13 10:32:59.701601] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.701606] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:05.953 [2024-12-13 10:32:59.701613] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:05.953 [2024-12-13 10:32:59.701618] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.701631] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.701636] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.701662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.953 [2024-12-13 10:32:59.701672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.953 [2024-12-13 10:32:59.701680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.701685] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:05.953 [2024-12-13 10:32:59.701709] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:31:05.953 [2024-12-13 10:32:59.701728] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:31:05.953 [2024-12-13 10:32:59.701742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:31:05.953 [2024-12-13 10:32:59.701755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.701761] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:05.953 [2024-12-13 10:32:59.701771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.953 [2024-12-13 10:32:59.701786] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:05.953 [2024-12-13 10:32:59.701902] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:05.953 [2024-12-13 10:32:59.701911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:05.953 [2024-12-13 10:32:59.701915] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.701920] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:05.953 [2024-12-13 10:32:59.701927] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:05.953 [2024-12-13 10:32:59.701937] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.701945] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.701950] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.701977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.953 [2024-12-13 10:32:59.701985] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.953 [2024-12-13 10:32:59.701989] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.701994] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:05.953 [2024-12-13 10:32:59.702017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:05.953 [2024-12-13 10:32:59.702031] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:05.953 [2024-12-13 10:32:59.702045] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.702053] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:05.953 [2024-12-13 10:32:59.702064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.953 [2024-12-13 10:32:59.702079] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:05.953 [2024-12-13 10:32:59.702171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:05.953 [2024-12-13 10:32:59.702179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:05.953 [2024-12-13 10:32:59.702184] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.702190] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:05.953 [2024-12-13 10:32:59.702196] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:05.953 [2024-12-13 10:32:59.702201] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.702212] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.702217] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.702249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.953 [2024-12-13 10:32:59.702258] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.953 [2024-12-13 10:32:59.702262] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.953 [2024-12-13 10:32:59.702268] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:05.953 [2024-12-13 10:32:59.702286] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:05.953 [2024-12-13 10:32:59.702297] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:31:05.953 [2024-12-13 10:32:59.702307] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:31:05.953 [2024-12-13 10:32:59.702316] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:31:05.953 [2024-12-13 10:32:59.702323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:31:05.953 [2024-12-13 10:32:59.702330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:31:05.953 [2024-12-13 10:32:59.702338] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:31:05.953 [2024-12-13 10:32:59.702344] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:31:05.953 [2024-12-13 10:32:59.702352] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:31:05.954 [2024-12-13 10:32:59.702381] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.702390] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:05.954 [2024-12-13 10:32:59.702401] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.954 [2024-12-13 10:32:59.702410] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.702416] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.702421] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:05.954 [2024-12-13 10:32:59.702430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.954 [2024-12-13 10:32:59.702458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:05.954 [2024-12-13 10:32:59.702467] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:05.954 [2024-12-13 10:32:59.702590] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.954 [2024-12-13 10:32:59.702599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.954 [2024-12-13 10:32:59.702605] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.702611] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:05.954 [2024-12-13 10:32:59.702623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.954 [2024-12-13 10:32:59.702631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.954 [2024-12-13 10:32:59.702635] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.702641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:05.954 [2024-12-13 10:32:59.702658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.702664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:05.954 [2024-12-13 10:32:59.702673] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.954 [2024-12-13 10:32:59.702687] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:05.954 [2024-12-13 10:32:59.702788] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.954 [2024-12-13 10:32:59.702797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.954 [2024-12-13 10:32:59.702801] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.702807] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:05.954 [2024-12-13 10:32:59.702818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.702824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:05.954 [2024-12-13 10:32:59.702836] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.954 [2024-12-13 10:32:59.702849] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:05.954 [2024-12-13 10:32:59.702924] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.954 [2024-12-13 10:32:59.702932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.954 [2024-12-13 10:32:59.702937] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.702943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:05.954 [2024-12-13 10:32:59.702955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.702961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:05.954 [2024-12-13 10:32:59.702970] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.954 [2024-12-13 10:32:59.702982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:05.954 [2024-12-13 10:32:59.703100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.954 [2024-12-13 10:32:59.703108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.954 [2024-12-13 10:32:59.703113] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:05.954 [2024-12-13 10:32:59.703142] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:05.954 [2024-12-13 10:32:59.703159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.954 [2024-12-13 10:32:59.703169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703174] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:05.954 [2024-12-13 10:32:59.703184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.954 [2024-12-13 10:32:59.703193] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703199] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500001db80) 00:31:05.954 [2024-12-13 10:32:59.703208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.954 [2024-12-13 10:32:59.703225] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500001db80) 00:31:05.954 [2024-12-13 10:32:59.703242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.954 [2024-12-13 10:32:59.703258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:05.954 [2024-12-13 10:32:59.703266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:05.954 [2024-12-13 10:32:59.703272] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:31:05.954 [2024-12-13 10:32:59.703277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:31:05.954 [2024-12-13 10:32:59.703467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:05.954 [2024-12-13 10:32:59.703477] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:05.954 [2024-12-13 10:32:59.703483] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703488] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=8192, cccid=5 00:31:05.954 [2024-12-13 10:32:59.703495] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500001db80): expected_datao=0, payload_size=8192 00:31:05.954 [2024-12-13 10:32:59.703502] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703512] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703518] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703526] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:05.954 [2024-12-13 10:32:59.703537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:05.954 [2024-12-13 10:32:59.703542] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703548] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=512, cccid=4 00:31:05.954 [2024-12-13 10:32:59.703553] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=512 00:31:05.954 [2024-12-13 10:32:59.703559] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703567] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703572] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:05.954 [2024-12-13 10:32:59.703585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:05.954 [2024-12-13 10:32:59.703590] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703595] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=512, cccid=6 00:31:05.954 [2024-12-13 10:32:59.703601] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500001db80): expected_datao=0, payload_size=512 00:31:05.954 [2024-12-13 10:32:59.703606] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703616] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703621] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:05.954 [2024-12-13 10:32:59.703634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:05.954 [2024-12-13 10:32:59.703639] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703644] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=7 00:31:05.954 [2024-12-13 10:32:59.703649] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:05.954 [2024-12-13 10:32:59.703657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703673] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703678] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703687] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.954 [2024-12-13 10:32:59.703694] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.954 [2024-12-13 10:32:59.703699] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:05.954 [2024-12-13 10:32:59.703728] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.954 [2024-12-13 10:32:59.703739] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.954 [2024-12-13 10:32:59.703744] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703749] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:05.954 [2024-12-13 10:32:59.703763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.954 [2024-12-13 10:32:59.703770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.954 [2024-12-13 10:32:59.703775] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.954 [2024-12-13 10:32:59.703780] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500001db80 00:31:05.954 [2024-12-13 10:32:59.703789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.954 [2024-12-13 10:32:59.703796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.954 [2024-12-13 10:32:59.703800] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.955 [2024-12-13 10:32:59.703805] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500001db80 00:31:05.955 ===================================================== 00:31:05.955 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:05.955 ===================================================== 00:31:05.955 Controller Capabilities/Features 00:31:05.955 ================================ 00:31:05.955 Vendor ID: 8086 00:31:05.955 Subsystem Vendor ID: 8086 00:31:05.955 Serial Number: SPDK00000000000001 00:31:05.955 Model Number: SPDK bdev Controller 00:31:05.955 Firmware Version: 25.01 00:31:05.955 Recommended Arb Burst: 6 00:31:05.955 IEEE OUI Identifier: e4 d2 5c 00:31:05.955 Multi-path I/O 00:31:05.955 May have multiple subsystem ports: Yes 00:31:05.955 May have multiple controllers: Yes 00:31:05.955 Associated with SR-IOV VF: No 00:31:05.955 Max Data Transfer Size: 131072 00:31:05.955 Max Number of Namespaces: 32 00:31:05.955 Max Number of I/O Queues: 127 00:31:05.955 NVMe Specification Version (VS): 1.3 00:31:05.955 NVMe Specification Version (Identify): 1.3 00:31:05.955 Maximum Queue Entries: 128 00:31:05.955 Contiguous Queues Required: Yes 00:31:05.955 Arbitration Mechanisms Supported 00:31:05.955 Weighted Round Robin: Not Supported 00:31:05.955 Vendor Specific: Not Supported 00:31:05.955 Reset Timeout: 15000 ms 00:31:05.955 Doorbell Stride: 4 bytes 00:31:05.955 NVM Subsystem Reset: Not Supported 00:31:05.955 Command Sets Supported 00:31:05.955 NVM Command Set: Supported 00:31:05.955 Boot Partition: Not Supported 00:31:05.955 Memory Page Size Minimum: 4096 bytes 00:31:05.955 Memory Page Size Maximum: 4096 bytes 00:31:05.955 Persistent Memory Region: Not Supported 00:31:05.955 Optional Asynchronous Events Supported 00:31:05.955 Namespace Attribute Notices: Supported 00:31:05.955 Firmware Activation Notices: Not Supported 00:31:05.955 ANA Change Notices: Not Supported 00:31:05.955 PLE Aggregate Log Change Notices: Not Supported 00:31:05.955 LBA Status Info Alert Notices: Not Supported 00:31:05.955 EGE Aggregate Log Change Notices: Not Supported 00:31:05.955 Normal NVM Subsystem Shutdown event: Not Supported 00:31:05.955 Zone Descriptor Change Notices: Not Supported 00:31:05.955 Discovery Log Change Notices: Not Supported 
00:31:05.955 Controller Attributes 00:31:05.955 128-bit Host Identifier: Supported 00:31:05.955 Non-Operational Permissive Mode: Not Supported 00:31:05.955 NVM Sets: Not Supported 00:31:05.955 Read Recovery Levels: Not Supported 00:31:05.955 Endurance Groups: Not Supported 00:31:05.955 Predictable Latency Mode: Not Supported 00:31:05.955 Traffic Based Keep ALive: Not Supported 00:31:05.955 Namespace Granularity: Not Supported 00:31:05.955 SQ Associations: Not Supported 00:31:05.955 UUID List: Not Supported 00:31:05.955 Multi-Domain Subsystem: Not Supported 00:31:05.955 Fixed Capacity Management: Not Supported 00:31:05.955 Variable Capacity Management: Not Supported 00:31:05.955 Delete Endurance Group: Not Supported 00:31:05.955 Delete NVM Set: Not Supported 00:31:05.955 Extended LBA Formats Supported: Not Supported 00:31:05.955 Flexible Data Placement Supported: Not Supported 00:31:05.955 00:31:05.955 Controller Memory Buffer Support 00:31:05.955 ================================ 00:31:05.955 Supported: No 00:31:05.955 00:31:05.955 Persistent Memory Region Support 00:31:05.955 ================================ 00:31:05.955 Supported: No 00:31:05.955 00:31:05.955 Admin Command Set Attributes 00:31:05.955 ============================ 00:31:05.955 Security Send/Receive: Not Supported 00:31:05.955 Format NVM: Not Supported 00:31:05.955 Firmware Activate/Download: Not Supported 00:31:05.955 Namespace Management: Not Supported 00:31:05.955 Device Self-Test: Not Supported 00:31:05.955 Directives: Not Supported 00:31:05.955 NVMe-MI: Not Supported 00:31:05.955 Virtualization Management: Not Supported 00:31:05.955 Doorbell Buffer Config: Not Supported 00:31:05.955 Get LBA Status Capability: Not Supported 00:31:05.955 Command & Feature Lockdown Capability: Not Supported 00:31:05.955 Abort Command Limit: 4 00:31:05.955 Async Event Request Limit: 4 00:31:05.955 Number of Firmware Slots: N/A 00:31:05.955 Firmware Slot 1 Read-Only: N/A 00:31:05.955 Firmware Activation Without Reset: N/A 00:31:05.955 Multiple Update Detection Support: N/A 00:31:05.955 Firmware Update Granularity: No Information Provided 00:31:05.955 Per-Namespace SMART Log: No 00:31:05.955 Asymmetric Namespace Access Log Page: Not Supported 00:31:05.955 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:31:05.955 Command Effects Log Page: Supported 00:31:05.955 Get Log Page Extended Data: Supported 00:31:05.955 Telemetry Log Pages: Not Supported 00:31:05.955 Persistent Event Log Pages: Not Supported 00:31:05.955 Supported Log Pages Log Page: May Support 00:31:05.955 Commands Supported & Effects Log Page: Not Supported 00:31:05.955 Feature Identifiers & Effects Log Page:May Support 00:31:05.955 NVMe-MI Commands & Effects Log Page: May Support 00:31:05.955 Data Area 4 for Telemetry Log: Not Supported 00:31:05.955 Error Log Page Entries Supported: 128 00:31:05.955 Keep Alive: Supported 00:31:05.955 Keep Alive Granularity: 10000 ms 00:31:05.955 00:31:05.955 NVM Command Set Attributes 00:31:05.955 ========================== 00:31:05.955 Submission Queue Entry Size 00:31:05.955 Max: 64 00:31:05.955 Min: 64 00:31:05.955 Completion Queue Entry Size 00:31:05.955 Max: 16 00:31:05.955 Min: 16 00:31:05.955 Number of Namespaces: 32 00:31:05.955 Compare Command: Supported 00:31:05.955 Write Uncorrectable Command: Not Supported 00:31:05.955 Dataset Management Command: Supported 00:31:05.955 Write Zeroes Command: Supported 00:31:05.955 Set Features Save Field: Not Supported 00:31:05.955 Reservations: Supported 00:31:05.955 Timestamp: Not Supported 00:31:05.955 
Copy: Supported 00:31:05.955 Volatile Write Cache: Present 00:31:05.955 Atomic Write Unit (Normal): 1 00:31:05.955 Atomic Write Unit (PFail): 1 00:31:05.955 Atomic Compare & Write Unit: 1 00:31:05.955 Fused Compare & Write: Supported 00:31:05.955 Scatter-Gather List 00:31:05.955 SGL Command Set: Supported 00:31:05.955 SGL Keyed: Supported 00:31:05.955 SGL Bit Bucket Descriptor: Not Supported 00:31:05.955 SGL Metadata Pointer: Not Supported 00:31:05.955 Oversized SGL: Not Supported 00:31:05.955 SGL Metadata Address: Not Supported 00:31:05.955 SGL Offset: Supported 00:31:05.955 Transport SGL Data Block: Not Supported 00:31:05.955 Replay Protected Memory Block: Not Supported 00:31:05.955 00:31:05.955 Firmware Slot Information 00:31:05.955 ========================= 00:31:05.955 Active slot: 1 00:31:05.955 Slot 1 Firmware Revision: 25.01 00:31:05.955 00:31:05.955 00:31:05.955 Commands Supported and Effects 00:31:05.955 ============================== 00:31:05.955 Admin Commands 00:31:05.955 -------------- 00:31:05.955 Get Log Page (02h): Supported 00:31:05.955 Identify (06h): Supported 00:31:05.955 Abort (08h): Supported 00:31:05.955 Set Features (09h): Supported 00:31:05.955 Get Features (0Ah): Supported 00:31:05.955 Asynchronous Event Request (0Ch): Supported 00:31:05.955 Keep Alive (18h): Supported 00:31:05.955 I/O Commands 00:31:05.955 ------------ 00:31:05.955 Flush (00h): Supported LBA-Change 00:31:05.955 Write (01h): Supported LBA-Change 00:31:05.955 Read (02h): Supported 00:31:05.955 Compare (05h): Supported 00:31:05.955 Write Zeroes (08h): Supported LBA-Change 00:31:05.955 Dataset Management (09h): Supported LBA-Change 00:31:05.955 Copy (19h): Supported LBA-Change 00:31:05.955 00:31:05.955 Error Log 00:31:05.955 ========= 00:31:05.955 00:31:05.955 Arbitration 00:31:05.955 =========== 00:31:05.955 Arbitration Burst: 1 00:31:05.955 00:31:05.955 Power Management 00:31:05.955 ================ 00:31:05.956 Number of Power States: 1 00:31:05.956 Current Power State: Power State #0 00:31:05.956 Power State #0: 00:31:05.956 Max Power: 0.00 W 00:31:05.956 Non-Operational State: Operational 00:31:05.956 Entry Latency: Not Reported 00:31:05.956 Exit Latency: Not Reported 00:31:05.956 Relative Read Throughput: 0 00:31:05.956 Relative Read Latency: 0 00:31:05.956 Relative Write Throughput: 0 00:31:05.956 Relative Write Latency: 0 00:31:05.956 Idle Power: Not Reported 00:31:05.956 Active Power: Not Reported 00:31:05.956 Non-Operational Permissive Mode: Not Supported 00:31:05.956 00:31:05.956 Health Information 00:31:05.956 ================== 00:31:05.956 Critical Warnings: 00:31:05.956 Available Spare Space: OK 00:31:05.956 Temperature: OK 00:31:05.956 Device Reliability: OK 00:31:05.956 Read Only: No 00:31:05.956 Volatile Memory Backup: OK 00:31:05.956 Current Temperature: 0 Kelvin (-273 Celsius) 00:31:05.956 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:31:05.956 Available Spare: 0% 00:31:05.956 Available Spare Threshold: 0% 00:31:05.956 Life Percentage Used:[2024-12-13 10:32:59.703945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.956 [2024-12-13 10:32:59.703953] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500001db80) 00:31:05.956 [2024-12-13 10:32:59.703964] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.956 [2024-12-13 10:32:59.703981] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x62600001bb80, cid 7, qid 0 00:31:05.956 [2024-12-13 10:32:59.704104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.956 [2024-12-13 10:32:59.704114] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.956 [2024-12-13 10:32:59.704119] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.956 [2024-12-13 10:32:59.704125] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500001db80 00:31:05.956 [2024-12-13 10:32:59.704170] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:31:05.956 [2024-12-13 10:32:59.704184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:05.956 [2024-12-13 10:32:59.704197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.956 [2024-12-13 10:32:59.704205] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500001db80 00:31:05.956 [2024-12-13 10:32:59.704213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.956 [2024-12-13 10:32:59.704219] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500001db80 00:31:05.956 [2024-12-13 10:32:59.704226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.956 [2024-12-13 10:32:59.704232] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.956 [2024-12-13 10:32:59.704241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.956 [2024-12-13 10:32:59.704252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.956 [2024-12-13 10:32:59.704258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.956 [2024-12-13 10:32:59.704264] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.956 [2024-12-13 10:32:59.704275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.956 [2024-12-13 10:32:59.704291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.956 [2024-12-13 10:32:59.704415] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.956 [2024-12-13 10:32:59.704424] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.956 [2024-12-13 10:32:59.704430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.956 [2024-12-13 10:32:59.704435] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.956 [2024-12-13 10:32:59.708457] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.956 [2024-12-13 10:32:59.708472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.956 [2024-12-13 10:32:59.708478] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.956 [2024-12-13 10:32:59.708490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.956 [2024-12-13 10:32:59.708515] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.956 [2024-12-13 10:32:59.708669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.956 [2024-12-13 10:32:59.708678] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.956 [2024-12-13 10:32:59.708683] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.956 [2024-12-13 10:32:59.708689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.956 [2024-12-13 10:32:59.708696] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:31:05.956 [2024-12-13 10:32:59.708703] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:31:05.956 [2024-12-13 10:32:59.708718] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.956 [2024-12-13 10:32:59.708724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.956 [2024-12-13 10:32:59.708735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.956 [2024-12-13 10:32:59.708746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.956 [2024-12-13 10:32:59.708761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.956 [2024-12-13 10:32:59.708834] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.956 [2024-12-13 10:32:59.708843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.956 [2024-12-13 10:32:59.708847] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.956 [2024-12-13 10:32:59.708852] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.956 [2024-12-13 10:32:59.708865] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.956 [2024-12-13 10:32:59.708870] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.956 [2024-12-13 10:32:59.708875] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.956 [2024-12-13 10:32:59.708884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.956 [2024-12-13 10:32:59.708896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.956 [2024-12-13 10:32:59.708970] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.956 [2024-12-13 10:32:59.708981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.956 [2024-12-13 10:32:59.708985] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.956 [2024-12-13 10:32:59.708991] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.956 [2024-12-13 10:32:59.709003] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.956 [2024-12-13 10:32:59.709008] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.956 [2024-12-13 10:32:59.709013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x61500001db80) 00:31:05.956 [2024-12-13 10:32:59.709024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.956 [2024-12-13 10:32:59.709037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.956 [2024-12-13 10:32:59.709120] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.956 [2024-12-13 10:32:59.709128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.956 [2024-12-13 10:32:59.709133] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.956 [2024-12-13 10:32:59.709138] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.956 [2024-12-13 10:32:59.709150] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.956 [2024-12-13 10:32:59.709155] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.956 [2024-12-13 10:32:59.709160] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.956 [2024-12-13 10:32:59.709169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.956 [2024-12-13 10:32:59.709181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.956 [2024-12-13 10:32:59.709271] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.956 [2024-12-13 10:32:59.709279] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.956 [2024-12-13 10:32:59.709284] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.956 [2024-12-13 10:32:59.709289] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.956 [2024-12-13 10:32:59.709301] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.956 [2024-12-13 10:32:59.709307] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.709312] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.957 [2024-12-13 10:32:59.709320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.957 [2024-12-13 10:32:59.709332] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.957 [2024-12-13 10:32:59.709397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.957 [2024-12-13 10:32:59.709405] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.957 [2024-12-13 10:32:59.709409] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.709414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.957 [2024-12-13 10:32:59.709427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.709432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.709437] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.957 [2024-12-13 10:32:59.709446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.957 [2024-12-13 10:32:59.709465] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.957 [2024-12-13 10:32:59.709575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.957 [2024-12-13 10:32:59.709583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.957 [2024-12-13 10:32:59.709588] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.709594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.957 [2024-12-13 10:32:59.709606] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.709611] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.709616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.957 [2024-12-13 10:32:59.709625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.957 [2024-12-13 10:32:59.709638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.957 [2024-12-13 10:32:59.709724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.957 [2024-12-13 10:32:59.709732] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.957 [2024-12-13 10:32:59.709737] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.709742] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.957 [2024-12-13 10:32:59.709754] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.709759] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.709764] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.957 [2024-12-13 10:32:59.709773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.957 [2024-12-13 10:32:59.709785] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.957 [2024-12-13 10:32:59.709895] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.957 [2024-12-13 10:32:59.709903] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.957 [2024-12-13 10:32:59.709911] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.709916] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.957 [2024-12-13 10:32:59.709928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.709934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.709939] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.957 [2024-12-13 10:32:59.709948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.957 [2024-12-13 10:32:59.709960] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.957 [2024-12-13 10:32:59.710035] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.957 [2024-12-13 10:32:59.710043] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.957 [2024-12-13 10:32:59.710053] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.710058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.957 [2024-12-13 10:32:59.710071] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.710076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.710081] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.957 [2024-12-13 10:32:59.710090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.957 [2024-12-13 10:32:59.710103] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.957 [2024-12-13 10:32:59.710179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.957 [2024-12-13 10:32:59.710190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.957 [2024-12-13 10:32:59.710195] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.710200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.957 [2024-12-13 10:32:59.710211] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.710217] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.710222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.957 [2024-12-13 10:32:59.710235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.957 [2024-12-13 10:32:59.710248] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.957 [2024-12-13 10:32:59.710329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.957 [2024-12-13 10:32:59.710338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.957 [2024-12-13 10:32:59.710342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.710347] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.957 [2024-12-13 10:32:59.710359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.710364] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.710369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.957 [2024-12-13 10:32:59.710378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.957 [2024-12-13 10:32:59.710391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.957 [2024-12-13 10:32:59.710481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:31:05.957 [2024-12-13 10:32:59.710490] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.957 [2024-12-13 10:32:59.710494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.710499] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.957 [2024-12-13 10:32:59.710511] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.710516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.710521] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.957 [2024-12-13 10:32:59.710530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.957 [2024-12-13 10:32:59.710542] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.957 [2024-12-13 10:32:59.710616] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.957 [2024-12-13 10:32:59.710624] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.957 [2024-12-13 10:32:59.710628] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.710633] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.957 [2024-12-13 10:32:59.710645] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.710651] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.710656] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.957 [2024-12-13 10:32:59.710665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.957 [2024-12-13 10:32:59.710677] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.957 [2024-12-13 10:32:59.710784] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.957 [2024-12-13 10:32:59.710792] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.957 [2024-12-13 10:32:59.710797] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.710802] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.957 [2024-12-13 10:32:59.710814] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.710819] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.710824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.957 [2024-12-13 10:32:59.710833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.957 [2024-12-13 10:32:59.710846] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.957 [2024-12-13 10:32:59.710935] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.957 [2024-12-13 10:32:59.710943] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.957 [2024-12-13 
10:32:59.710948] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.710953] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.957 [2024-12-13 10:32:59.710964] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.710970] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.957 [2024-12-13 10:32:59.710974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.957 [2024-12-13 10:32:59.710983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.958 [2024-12-13 10:32:59.710995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.958 [2024-12-13 10:32:59.711091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.958 [2024-12-13 10:32:59.711099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.958 [2024-12-13 10:32:59.711104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.711109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.958 [2024-12-13 10:32:59.711121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.711126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.711131] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.958 [2024-12-13 10:32:59.711140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.958 [2024-12-13 10:32:59.711152] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.958 [2024-12-13 10:32:59.711228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.958 [2024-12-13 10:32:59.711236] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.958 [2024-12-13 10:32:59.711241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.711246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.958 [2024-12-13 10:32:59.711258] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.711264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.711269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.958 [2024-12-13 10:32:59.711278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.958 [2024-12-13 10:32:59.711290] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.958 [2024-12-13 10:32:59.711389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.958 [2024-12-13 10:32:59.711402] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.958 [2024-12-13 10:32:59.711407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.711412] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.958 [2024-12-13 10:32:59.711432] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.711438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.711442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.958 [2024-12-13 10:32:59.711460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.958 [2024-12-13 10:32:59.711475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.958 [2024-12-13 10:32:59.711545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.958 [2024-12-13 10:32:59.711553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.958 [2024-12-13 10:32:59.711558] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.711563] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.958 [2024-12-13 10:32:59.711574] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.711580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.711585] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.958 [2024-12-13 10:32:59.711593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.958 [2024-12-13 10:32:59.711606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.958 [2024-12-13 10:32:59.711704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.958 [2024-12-13 10:32:59.711712] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.958 [2024-12-13 10:32:59.711717] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.711722] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.958 [2024-12-13 10:32:59.711734] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.711740] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.711745] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.958 [2024-12-13 10:32:59.711754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.958 [2024-12-13 10:32:59.711766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.958 [2024-12-13 10:32:59.711845] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.958 [2024-12-13 10:32:59.711853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.958 [2024-12-13 10:32:59.711858] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.711863] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.958 [2024-12-13 10:32:59.711875] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.711880] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.711885] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.958 [2024-12-13 10:32:59.711894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.958 [2024-12-13 10:32:59.711906] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.958 [2024-12-13 10:32:59.711997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.958 [2024-12-13 10:32:59.712007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.958 [2024-12-13 10:32:59.712012] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.712017] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.958 [2024-12-13 10:32:59.712028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.712034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.712039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.958 [2024-12-13 10:32:59.712048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.958 [2024-12-13 10:32:59.712060] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.958 [2024-12-13 10:32:59.712148] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.958 [2024-12-13 10:32:59.712156] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.958 [2024-12-13 10:32:59.712161] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.712166] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.958 [2024-12-13 10:32:59.712183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.712188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.712193] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.958 [2024-12-13 10:32:59.712202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.958 [2024-12-13 10:32:59.712215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.958 [2024-12-13 10:32:59.712300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.958 [2024-12-13 10:32:59.712308] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.958 [2024-12-13 10:32:59.712313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.712318] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.958 [2024-12-13 10:32:59.712330] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.712335] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.712340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.958 [2024-12-13 10:32:59.712349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.958 [2024-12-13 10:32:59.712362] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.958 [2024-12-13 10:32:59.712431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.958 [2024-12-13 10:32:59.712439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.958 [2024-12-13 10:32:59.712444] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.716461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.958 [2024-12-13 10:32:59.716484] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.716490] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.716495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:05.958 [2024-12-13 10:32:59.716505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.958 [2024-12-13 10:32:59.716523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:05.958 [2024-12-13 10:32:59.716660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:05.958 [2024-12-13 10:32:59.716675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:05.958 [2024-12-13 10:32:59.716680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:05.958 [2024-12-13 10:32:59.716685] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:05.958 [2024-12-13 10:32:59.716696] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:31:05.958 0% 00:31:05.958 Data Units Read: 0 00:31:05.958 Data Units Written: 0 00:31:05.958 Host Read Commands: 0 00:31:05.958 Host Write Commands: 0 00:31:05.958 Controller Busy Time: 0 minutes 00:31:05.958 Power Cycles: 0 00:31:05.958 Power On Hours: 0 hours 00:31:05.958 Unsafe Shutdowns: 0 00:31:05.958 Unrecoverable Media Errors: 0 00:31:05.958 Lifetime Error Log Entries: 0 00:31:05.958 Warning Temperature Time: 0 minutes 00:31:05.958 Critical Temperature Time: 0 minutes 00:31:05.958 00:31:05.958 Number of Queues 00:31:05.958 ================ 00:31:05.959 Number of I/O Submission Queues: 127 00:31:05.959 Number of I/O Completion Queues: 127 00:31:05.959 00:31:05.959 Active Namespaces 00:31:05.959 ================= 00:31:05.959 Namespace ID:1 00:31:05.959 Error Recovery Timeout: Unlimited 00:31:05.959 Command Set Identifier: NVM (00h) 00:31:05.959 Deallocate: Supported 00:31:05.959 Deallocated/Unwritten Error: Not Supported 00:31:05.959 Deallocated Read Value: Unknown 00:31:05.959 Deallocate in Write Zeroes: Not Supported 00:31:05.959 Deallocated Guard Field: 0xFFFF 00:31:05.959 Flush: Supported 00:31:05.959 Reservation: Supported 00:31:05.959 Namespace Sharing Capabilities: Multiple Controllers 00:31:05.959 Size (in LBAs): 131072 (0GiB) 
00:31:05.959 Capacity (in LBAs): 131072 (0GiB) 00:31:05.959 Utilization (in LBAs): 131072 (0GiB) 00:31:05.959 NGUID: ABCDEF0123456789ABCDEF0123456789 00:31:05.959 EUI64: ABCDEF0123456789 00:31:05.959 UUID: 153be8fe-7271-442d-a50b-bf7a69fae75a 00:31:05.959 Thin Provisioning: Not Supported 00:31:05.959 Per-NS Atomic Units: Yes 00:31:05.959 Atomic Boundary Size (Normal): 0 00:31:05.959 Atomic Boundary Size (PFail): 0 00:31:05.959 Atomic Boundary Offset: 0 00:31:05.959 Maximum Single Source Range Length: 65535 00:31:05.959 Maximum Copy Length: 65535 00:31:05.959 Maximum Source Range Count: 1 00:31:05.959 NGUID/EUI64 Never Reused: No 00:31:05.959 Namespace Write Protected: No 00:31:05.959 Number of LBA Formats: 1 00:31:05.959 Current LBA Format: LBA Format #00 00:31:05.959 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:05.959 00:31:05.959 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:05.959 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:05.959 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.959 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:05.959 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.959 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:05.959 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:31:05.959 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:05.959 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:31:05.959 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:05.959 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:31:05.959 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:05.959 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:05.959 rmmod nvme_tcp 00:31:05.959 rmmod nvme_fabrics 00:31:05.959 rmmod nvme_keyring 00:31:06.218 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:06.218 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:31:06.218 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:31:06.218 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 4061653 ']' 00:31:06.218 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 4061653 00:31:06.218 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 4061653 ']' 00:31:06.218 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 4061653 00:31:06.218 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:31:06.218 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:06.218 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4061653 00:31:06.218 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:06.218 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
00:31:06.218 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4061653' 00:31:06.218 killing process with pid 4061653 00:31:06.218 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 4061653 00:31:06.218 10:32:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 4061653 00:31:07.594 10:33:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:07.594 10:33:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:07.594 10:33:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:07.594 10:33:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:31:07.594 10:33:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:31:07.594 10:33:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:07.594 10:33:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:31:07.594 10:33:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:07.594 10:33:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:07.594 10:33:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.594 10:33:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.594 10:33:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.497 10:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:09.497 00:31:09.497 real 0m10.930s 00:31:09.497 user 0m12.155s 00:31:09.497 sys 0m4.710s 00:31:09.497 10:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:09.497 10:33:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:09.497 ************************************ 00:31:09.497 END TEST nvmf_identify 00:31:09.497 ************************************ 00:31:09.497 10:33:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:09.497 10:33:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:09.497 10:33:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:09.497 10:33:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:09.756 ************************************ 00:31:09.756 START TEST nvmf_perf 00:31:09.756 ************************************ 00:31:09.756 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:09.756 * Looking for test storage... 
00:31:09.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:09.756 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:09.756 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:31:09.756 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:09.756 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:09.756 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:09.756 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:09.756 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:09.756 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:31:09.756 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:09.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.757 --rc genhtml_branch_coverage=1 00:31:09.757 --rc genhtml_function_coverage=1 00:31:09.757 --rc genhtml_legend=1 00:31:09.757 --rc geninfo_all_blocks=1 00:31:09.757 --rc geninfo_unexecuted_blocks=1 00:31:09.757 00:31:09.757 ' 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:09.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.757 --rc genhtml_branch_coverage=1 00:31:09.757 --rc genhtml_function_coverage=1 00:31:09.757 --rc genhtml_legend=1 00:31:09.757 --rc geninfo_all_blocks=1 00:31:09.757 --rc geninfo_unexecuted_blocks=1 00:31:09.757 00:31:09.757 ' 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:09.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.757 --rc genhtml_branch_coverage=1 00:31:09.757 --rc genhtml_function_coverage=1 00:31:09.757 --rc genhtml_legend=1 00:31:09.757 --rc geninfo_all_blocks=1 00:31:09.757 --rc geninfo_unexecuted_blocks=1 00:31:09.757 00:31:09.757 ' 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:09.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.757 --rc genhtml_branch_coverage=1 00:31:09.757 --rc genhtml_function_coverage=1 00:31:09.757 --rc genhtml_legend=1 00:31:09.757 --rc geninfo_all_blocks=1 00:31:09.757 --rc geninfo_unexecuted_blocks=1 00:31:09.757 00:31:09.757 ' 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:09.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.757 10:33:03 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:09.757 10:33:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:15.025 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:15.026 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:15.026 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:15.026 Found net devices under 0000:af:00.0: cvl_0_0 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:15.026 10:33:08 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:15.026 Found net devices under 0000:af:00.1: cvl_0_1 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:15.026 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:15.285 10:33:08 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:15.285 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:15.285 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:15.285 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:15.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:15.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:31:15.285 00:31:15.285 --- 10.0.0.2 ping statistics --- 00:31:15.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.285 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:31:15.285 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:15.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:15.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:31:15.285 00:31:15.285 --- 10.0.0.1 ping statistics --- 00:31:15.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:15.285 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:31:15.285 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:15.285 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:31:15.285 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:15.285 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:15.285 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:15.285 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:15.285 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:15.285 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:15.285 10:33:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:15.285 10:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:15.285 10:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:15.285 10:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:15.285 10:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:15.285 10:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=4065804 00:31:15.285 10:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 4065804 00:31:15.285 10:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:15.285 10:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 4065804 ']' 00:31:15.285 10:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.285 10:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:15.285 10:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:31:15.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:15.285 10:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:15.285 10:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:15.285 [2024-12-13 10:33:09.099132] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:31:15.285 [2024-12-13 10:33:09.099220] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:15.544 [2024-12-13 10:33:09.218407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:15.544 [2024-12-13 10:33:09.330454] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:15.544 [2024-12-13 10:33:09.330502] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:15.544 [2024-12-13 10:33:09.330512] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:15.544 [2024-12-13 10:33:09.330539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:15.544 [2024-12-13 10:33:09.330548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:15.544 [2024-12-13 10:33:09.332809] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.544 [2024-12-13 10:33:09.332886] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:15.544 [2024-12-13 10:33:09.332911] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:15.544 [2024-12-13 10:33:09.332901] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.115 10:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:16.115 10:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:31:16.115 10:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:16.115 10:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:16.115 10:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:16.115 10:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:16.115 10:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:16.115 10:33:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:19.400 10:33:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:19.400 10:33:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:19.400 10:33:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:31:19.400 10:33:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:19.659 10:33:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
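The trace that follows configures the NVMe-oF TCP target through rpc.py before the perf runs start. As a reading aid, the essential sequence reduces to the sketch below, using the values from this run (the cnode1 NQN, the 10.0.0.2/4420 listener, and the Malloc0/Nvme0n1 namespaces); the $rpc shorthand is introduced here for brevity and the surrounding shell checks and xtrace output are omitted:

    # Assumes $rpc points at scripts/rpc.py in the SPDK checkout used by this job
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Bdevs created earlier in the trace: a 64 MiB/512 B-block malloc bdev and the local NVMe at 0000:5e:00.0
    $rpc bdev_malloc_create 64 512                                                   # -> Malloc0
    $rpc nvmf_create_transport -t tcp -o                                             # enable the TCP transport (perf.sh@42)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # create the target subsystem (perf.sh@44)
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # namespace 1: malloc bdev (perf.sh@46)
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1                    # namespace 2: local NVMe bdev (perf.sh@46)
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # data listener (perf.sh@48)
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420                    # discovery listener (perf.sh@49)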
00:31:19.659 10:33:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:31:19.659 10:33:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:19.659 10:33:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:31:19.659 10:33:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:19.917 [2024-12-13 10:33:13.676782] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:19.917 10:33:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:20.176 10:33:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:20.176 10:33:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:20.434 10:33:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:20.434 10:33:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:20.693 10:33:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:20.693 [2024-12-13 10:33:14.499881] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.693 10:33:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:20.951 10:33:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:31:20.951 10:33:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:31:20.951 10:33:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:20.951 10:33:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:31:22.326 Initializing NVMe Controllers 00:31:22.326 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:31:22.326 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:31:22.326 Initialization complete. Launching workers. 
00:31:22.326 ======================================================== 00:31:22.326 Latency(us) 00:31:22.326 Device Information : IOPS MiB/s Average min max 00:31:22.326 PCIE (0000:5e:00.0) NSID 1 from core 0: 91751.41 358.40 348.25 21.90 4277.21 00:31:22.326 ======================================================== 00:31:22.326 Total : 91751.41 358.40 348.25 21.90 4277.21 00:31:22.326 00:31:22.326 10:33:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:23.702 Initializing NVMe Controllers 00:31:23.702 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:23.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:23.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:23.702 Initialization complete. Launching workers. 00:31:23.702 ======================================================== 00:31:23.702 Latency(us) 00:31:23.702 Device Information : IOPS MiB/s Average min max 00:31:23.702 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 96.00 0.38 10727.37 127.95 44677.23 00:31:23.702 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 50.00 0.20 20711.36 7018.93 48850.10 00:31:23.702 ======================================================== 00:31:23.702 Total : 146.00 0.57 14146.55 127.95 48850.10 00:31:23.702 00:31:23.702 10:33:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:25.078 Initializing NVMe Controllers 00:31:25.078 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:25.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:25.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:25.078 Initialization complete. Launching workers. 00:31:25.078 ======================================================== 00:31:25.078 Latency(us) 00:31:25.078 Device Information : IOPS MiB/s Average min max 00:31:25.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9384.76 36.66 3412.88 498.59 8088.22 00:31:25.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3774.50 14.74 8507.61 5539.55 16891.34 00:31:25.078 ======================================================== 00:31:25.078 Total : 13159.26 51.40 4874.21 498.59 16891.34 00:31:25.078 00:31:25.336 10:33:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:25.336 10:33:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:25.336 10:33:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:28.625 Initializing NVMe Controllers 00:31:28.625 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:28.625 Controller IO queue size 128, less than required. 00:31:28.625 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:28.625 Controller IO queue size 128, less than required. 00:31:28.625 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:28.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:28.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:28.625 Initialization complete. Launching workers. 00:31:28.625 ======================================================== 00:31:28.625 Latency(us) 00:31:28.625 Device Information : IOPS MiB/s Average min max 00:31:28.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1512.49 378.12 88856.75 56575.74 326604.45 00:31:28.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 562.00 140.50 244080.17 120319.10 597846.82 00:31:28.625 ======================================================== 00:31:28.625 Total : 2074.49 518.62 130908.12 56575.74 597846.82 00:31:28.625 00:31:28.625 10:33:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:28.625 No valid NVMe controllers or AIO or URING devices found 00:31:28.625 Initializing NVMe Controllers 00:31:28.625 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:28.625 Controller IO queue size 128, less than required. 00:31:28.625 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:28.625 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:28.625 Controller IO queue size 128, less than required. 00:31:28.625 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:28.625 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:31:28.625 WARNING: Some requested NVMe devices were skipped 00:31:28.625 10:33:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:31.910 Initializing NVMe Controllers 00:31:31.910 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:31.910 Controller IO queue size 128, less than required. 00:31:31.910 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:31.910 Controller IO queue size 128, less than required. 00:31:31.910 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:31.910 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:31.910 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:31.910 Initialization complete. Launching workers. 
00:31:31.910 00:31:31.910 ==================== 00:31:31.911 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:31.911 TCP transport: 00:31:31.911 polls: 8737 00:31:31.911 idle_polls: 5381 00:31:31.911 sock_completions: 3356 00:31:31.911 nvme_completions: 5197 00:31:31.911 submitted_requests: 7772 00:31:31.911 queued_requests: 1 00:31:31.911 00:31:31.911 ==================== 00:31:31.911 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:31.911 TCP transport: 00:31:31.911 polls: 12394 00:31:31.911 idle_polls: 8626 00:31:31.911 sock_completions: 3768 00:31:31.911 nvme_completions: 5537 00:31:31.911 submitted_requests: 8262 00:31:31.911 queued_requests: 1 00:31:31.911 ======================================================== 00:31:31.911 Latency(us) 00:31:31.911 Device Information : IOPS MiB/s Average min max 00:31:31.911 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1298.92 324.73 102908.97 54201.67 308429.96 00:31:31.911 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1383.92 345.98 96921.92 61732.45 480102.56 00:31:31.911 ======================================================== 00:31:31.911 Total : 2682.84 670.71 99820.61 54201.67 480102.56 00:31:31.911 00:31:31.911 10:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:31.911 10:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:31.911 10:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:31.911 10:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:31:31.911 10:33:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:35.194 10:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=538aabb1-9ee5-4c49-8b02-77b43b38697e 00:31:35.194 10:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 538aabb1-9ee5-4c49-8b02-77b43b38697e 00:31:35.194 10:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=538aabb1-9ee5-4c49-8b02-77b43b38697e 00:31:35.194 10:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:35.194 10:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:35.194 10:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:35.194 10:33:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:35.194 10:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:35.194 { 00:31:35.194 "uuid": "538aabb1-9ee5-4c49-8b02-77b43b38697e", 00:31:35.194 "name": "lvs_0", 00:31:35.194 "base_bdev": "Nvme0n1", 00:31:35.194 "total_data_clusters": 238234, 00:31:35.194 "free_clusters": 238234, 00:31:35.194 "block_size": 512, 00:31:35.194 "cluster_size": 4194304 00:31:35.194 } 00:31:35.194 ]' 00:31:35.194 10:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="538aabb1-9ee5-4c49-8b02-77b43b38697e") .free_clusters' 00:31:35.452 10:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:31:35.452 10:33:29 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="538aabb1-9ee5-4c49-8b02-77b43b38697e") .cluster_size' 00:31:35.452 10:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:35.452 10:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:31:35.452 10:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 00:31:35.452 952936 00:31:35.452 10:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:31:35.452 10:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:35.452 10:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 538aabb1-9ee5-4c49-8b02-77b43b38697e lbd_0 20480 00:31:35.710 10:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=b39ae31c-dea5-4d86-9ab6-ee78ebd89603 00:31:35.710 10:33:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore b39ae31c-dea5-4d86-9ab6-ee78ebd89603 lvs_n_0 00:31:36.645 10:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=8fb44083-d6d0-481e-a634-e25bcdc39896 00:31:36.645 10:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 8fb44083-d6d0-481e-a634-e25bcdc39896 00:31:36.645 10:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=8fb44083-d6d0-481e-a634-e25bcdc39896 00:31:36.645 10:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:36.645 10:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:36.645 10:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:36.645 10:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:36.645 10:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:36.645 { 00:31:36.645 "uuid": "538aabb1-9ee5-4c49-8b02-77b43b38697e", 00:31:36.645 "name": "lvs_0", 00:31:36.645 "base_bdev": "Nvme0n1", 00:31:36.645 "total_data_clusters": 238234, 00:31:36.645 "free_clusters": 233114, 00:31:36.645 "block_size": 512, 00:31:36.645 "cluster_size": 4194304 00:31:36.645 }, 00:31:36.645 { 00:31:36.645 "uuid": "8fb44083-d6d0-481e-a634-e25bcdc39896", 00:31:36.645 "name": "lvs_n_0", 00:31:36.645 "base_bdev": "b39ae31c-dea5-4d86-9ab6-ee78ebd89603", 00:31:36.645 "total_data_clusters": 5114, 00:31:36.645 "free_clusters": 5114, 00:31:36.645 "block_size": 512, 00:31:36.645 "cluster_size": 4194304 00:31:36.645 } 00:31:36.645 ]' 00:31:36.645 10:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="8fb44083-d6d0-481e-a634-e25bcdc39896") .free_clusters' 00:31:36.645 10:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:31:36.645 10:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="8fb44083-d6d0-481e-a634-e25bcdc39896") .cluster_size' 00:31:36.645 10:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:36.645 10:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:31:36.645 10:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1378 -- # echo 20456 00:31:36.645 20456 00:31:36.645 10:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:36.645 10:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8fb44083-d6d0-481e-a634-e25bcdc39896 lbd_nest_0 20456 00:31:36.902 10:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=78e115f0-199d-4ec8-be21-0410dbdcc47c 00:31:36.902 10:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:37.160 10:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:37.160 10:33:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 78e115f0-199d-4ec8-be21-0410dbdcc47c 00:31:37.419 10:33:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:37.677 10:33:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:37.677 10:33:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:37.677 10:33:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:37.677 10:33:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:37.677 10:33:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:49.881 Initializing NVMe Controllers 00:31:49.881 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:49.881 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:49.881 Initialization complete. Launching workers. 00:31:49.881 ======================================================== 00:31:49.881 Latency(us) 00:31:49.881 Device Information : IOPS MiB/s Average min max 00:31:49.881 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 48.49 0.02 20685.54 147.00 45680.16 00:31:49.881 ======================================================== 00:31:49.881 Total : 48.49 0.02 20685.54 147.00 45680.16 00:31:49.881 00:31:49.881 10:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:49.881 10:33:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:00.022 Initializing NVMe Controllers 00:32:00.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:00.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:00.022 Initialization complete. Launching workers. 
00:32:00.022 ======================================================== 00:32:00.022 Latency(us) 00:32:00.022 Device Information : IOPS MiB/s Average min max 00:32:00.022 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.10 10.01 12489.65 5986.92 48881.30 00:32:00.022 ======================================================== 00:32:00.022 Total : 80.10 10.01 12489.65 5986.92 48881.30 00:32:00.022 00:32:00.022 10:33:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:00.022 10:33:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:00.022 10:33:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:10.004 Initializing NVMe Controllers 00:32:10.004 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:10.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:10.004 Initialization complete. Launching workers. 00:32:10.004 ======================================================== 00:32:10.004 Latency(us) 00:32:10.004 Device Information : IOPS MiB/s Average min max 00:32:10.004 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8210.50 4.01 3897.28 270.17 8318.77 00:32:10.004 ======================================================== 00:32:10.004 Total : 8210.50 4.01 3897.28 270.17 8318.77 00:32:10.004 00:32:10.004 10:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:10.004 10:34:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:19.986 Initializing NVMe Controllers 00:32:19.986 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:19.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:19.986 Initialization complete. Launching workers. 00:32:19.986 ======================================================== 00:32:19.986 Latency(us) 00:32:19.986 Device Information : IOPS MiB/s Average min max 00:32:19.986 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3905.42 488.18 8195.00 664.41 28664.08 00:32:19.986 ======================================================== 00:32:19.986 Total : 3905.42 488.18 8195.00 664.41 28664.08 00:32:19.986 00:32:19.986 10:34:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:19.986 10:34:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:19.986 10:34:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:29.960 Initializing NVMe Controllers 00:32:29.961 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:29.961 Controller IO queue size 128, less than required. 00:32:29.961 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:32:29.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:29.961 Initialization complete. Launching workers. 00:32:29.961 ======================================================== 00:32:29.961 Latency(us) 00:32:29.961 Device Information : IOPS MiB/s Average min max 00:32:29.961 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13004.45 6.35 9842.97 1673.28 26078.17 00:32:29.961 ======================================================== 00:32:29.961 Total : 13004.45 6.35 9842.97 1673.28 26078.17 00:32:29.961 00:32:29.961 10:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:29.961 10:34:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:42.169 Initializing NVMe Controllers 00:32:42.169 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:42.169 Controller IO queue size 128, less than required. 00:32:42.169 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:42.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:42.169 Initialization complete. Launching workers. 00:32:42.169 ======================================================== 00:32:42.169 Latency(us) 00:32:42.169 Device Information : IOPS MiB/s Average min max 00:32:42.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1199.60 149.95 106791.73 15822.71 213599.54 00:32:42.169 ======================================================== 00:32:42.169 Total : 1199.60 149.95 106791.73 15822.71 213599.54 00:32:42.169 00:32:42.169 10:34:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:42.169 10:34:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 78e115f0-199d-4ec8-be21-0410dbdcc47c 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b39ae31c-dea5-4d86-9ab6-ee78ebd89603 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:42.169 rmmod nvme_tcp 
00:32:42.169 rmmod nvme_fabrics 00:32:42.169 rmmod nvme_keyring 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 4065804 ']' 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 4065804 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 4065804 ']' 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 4065804 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4065804 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4065804' 00:32:42.169 killing process with pid 4065804 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 4065804 00:32:42.169 10:34:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 4065804 00:32:44.701 10:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:44.701 10:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:44.701 10:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:44.701 10:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:32:44.701 10:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:32:44.701 10:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:32:44.701 10:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:44.701 10:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:44.701 10:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:44.701 10:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.701 10:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:44.701 10:34:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:46.605 10:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:46.605 00:32:46.605 real 1m36.898s 00:32:46.605 user 5m48.174s 00:32:46.605 sys 0m16.777s 00:32:46.605 10:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:46.605 10:34:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:46.605 ************************************ 00:32:46.605 END TEST nvmf_perf 00:32:46.605 ************************************ 00:32:46.605 10:34:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:46.605 10:34:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:46.605 10:34:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:46.605 10:34:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.605 ************************************ 00:32:46.605 START TEST nvmf_fio_host 00:32:46.605 ************************************ 00:32:46.605 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:46.605 * Looking for test storage... 00:32:46.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:46.606 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:46.606 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:32:46.606 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:46.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.866 --rc genhtml_branch_coverage=1 00:32:46.866 --rc genhtml_function_coverage=1 00:32:46.866 --rc genhtml_legend=1 00:32:46.866 --rc geninfo_all_blocks=1 00:32:46.866 --rc geninfo_unexecuted_blocks=1 00:32:46.866 00:32:46.866 ' 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:46.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.866 --rc genhtml_branch_coverage=1 00:32:46.866 --rc genhtml_function_coverage=1 00:32:46.866 --rc genhtml_legend=1 00:32:46.866 --rc geninfo_all_blocks=1 00:32:46.866 --rc geninfo_unexecuted_blocks=1 00:32:46.866 00:32:46.866 ' 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:46.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.866 --rc genhtml_branch_coverage=1 00:32:46.866 --rc genhtml_function_coverage=1 00:32:46.866 --rc genhtml_legend=1 00:32:46.866 --rc geninfo_all_blocks=1 00:32:46.866 --rc geninfo_unexecuted_blocks=1 00:32:46.866 00:32:46.866 ' 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:46.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.866 --rc genhtml_branch_coverage=1 00:32:46.866 --rc genhtml_function_coverage=1 00:32:46.866 --rc genhtml_legend=1 00:32:46.866 --rc geninfo_all_blocks=1 00:32:46.866 --rc geninfo_unexecuted_blocks=1 00:32:46.866 00:32:46.866 ' 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:46.866 10:34:40 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:46.866 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:46.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:46.867 
10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:46.867 10:34:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:53.436 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:53.436 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:53.436 Found net devices under 0000:af:00.0: cvl_0_0 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:53.436 Found net devices under 0000:af:00.1: cvl_0_1 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:53.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:53.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:32:53.436 00:32:53.436 --- 10.0.0.2 ping statistics --- 00:32:53.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.436 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:32:53.436 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:53.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:53.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:32:53.437 00:32:53.437 --- 10.0.0.1 ping statistics --- 00:32:53.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.437 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=4083622 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 4083622 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 4083622 ']' 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:53.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:53.437 10:34:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.437 [2024-12-13 10:34:46.484683] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
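[Editor's note — illustrative sketch, not part of the captured log.] The trace above shows nvmf_tcp_init wiring a two-port E810 NIC into a loopback test bed: one port is moved into a dedicated network namespace for the target, the other stays in the root namespace for the initiator, and reachability is verified with ping before the target is launched. A minimal sketch of that sequence, assuming iproute2/iptables and the interface names printed in the log (cvl_0_0 = target port, cvl_0_1 = initiator port); the real helpers live in test/nvmf/common.sh:

    # flush any stale addresses on both ports
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    # give the target its own namespace and move one NIC port into it
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address the initiator (root ns) and target (netns) sides
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic to the default port, then verify both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With the namespace reachable, the target is started inside it (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt ...), which is what the nvmfpid/waitforlisten lines that follow record.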
00:32:53.437 [2024-12-13 10:34:46.484772] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:53.437 [2024-12-13 10:34:46.604865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:53.437 [2024-12-13 10:34:46.719263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:53.437 [2024-12-13 10:34:46.719309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:53.437 [2024-12-13 10:34:46.719320] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:53.437 [2024-12-13 10:34:46.719330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:53.437 [2024-12-13 10:34:46.719338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:53.437 [2024-12-13 10:34:46.721627] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.437 [2024-12-13 10:34:46.721656] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:32:53.437 [2024-12-13 10:34:46.721673] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.437 [2024-12-13 10:34:46.721676] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:32:53.437 10:34:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:53.437 10:34:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:32:53.437 10:34:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:53.696 [2024-12-13 10:34:47.471665] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:53.696 10:34:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:53.696 10:34:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:53.696 10:34:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.696 10:34:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:53.955 Malloc1 00:32:53.955 10:34:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:54.213 10:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:54.472 10:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:54.731 [2024-12-13 10:34:48.389845] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:54.731 10:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:54.731 10:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:54.731 10:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:54.731 10:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:54.731 10:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:54.731 10:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:54.731 10:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:54.731 10:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:54.731 10:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:54.731 10:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:54.731 10:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:54.731 10:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:54.731 10:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:54.731 10:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:55.015 10:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:55.015 10:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:55.015 10:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:55.015 10:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:55.015 10:34:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:55.277 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:55.277 fio-3.35 00:32:55.277 Starting 1 thread 00:32:57.804 00:32:57.804 test: (groupid=0, jobs=1): err= 0: pid=4084204: Fri Dec 13 10:34:51 2024 00:32:57.804 read: IOPS=9796, BW=38.3MiB/s (40.1MB/s)(78.3MiB/2046msec) 00:32:57.804 slat (nsec): min=1661, max=194331, avg=1908.06, stdev=1909.04 00:32:57.804 clat (usec): min=2675, max=52840, avg=7173.51, stdev=2586.89 00:32:57.804 lat (usec): min=2704, max=52842, avg=7175.42, stdev=2586.86 00:32:57.804 clat percentiles (usec): 00:32:57.804 | 1.00th=[ 5800], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6587], 00:32:57.804 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7177], 00:32:57.804 | 70.00th=[ 
7308], 80.00th=[ 7439], 90.00th=[ 7701], 95.00th=[ 7832], 00:32:57.804 | 99.00th=[ 8291], 99.50th=[ 8717], 99.90th=[50594], 99.95th=[52167], 00:32:57.804 | 99.99th=[52691] 00:32:57.804 bw ( KiB/s): min=39168, max=40416, per=100.00%, avg=39942.00, stdev=539.33, samples=4 00:32:57.804 iops : min= 9792, max=10104, avg=9985.50, stdev=134.83, samples=4 00:32:57.804 write: IOPS=9811, BW=38.3MiB/s (40.2MB/s)(78.4MiB/2046msec); 0 zone resets 00:32:57.804 slat (nsec): min=1712, max=177505, avg=1949.31, stdev=1431.09 00:32:57.804 clat (usec): min=1959, max=50528, avg=5824.00, stdev=2267.14 00:32:57.804 lat (usec): min=1975, max=50531, avg=5825.95, stdev=2267.13 00:32:57.804 clat percentiles (usec): 00:32:57.804 | 1.00th=[ 4686], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5407], 00:32:57.804 | 30.00th=[ 5473], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 5800], 00:32:57.804 | 70.00th=[ 5932], 80.00th=[ 6063], 90.00th=[ 6259], 95.00th=[ 6390], 00:32:57.804 | 99.00th=[ 6783], 99.50th=[ 7046], 99.90th=[49021], 99.95th=[50070], 00:32:57.804 | 99.99th=[50594] 00:32:57.804 bw ( KiB/s): min=39552, max=40584, per=100.00%, avg=40038.00, stdev=518.64, samples=4 00:32:57.804 iops : min= 9888, max=10146, avg=10009.50, stdev=129.66, samples=4 00:32:57.804 lat (msec) : 2=0.01%, 4=0.13%, 10=99.55%, 50=0.20%, 100=0.11% 00:32:57.804 cpu : usr=75.75%, sys=23.08%, ctx=95, majf=0, minf=1505 00:32:57.804 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:57.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:57.804 issued rwts: total=20043,20074,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.804 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:57.804 00:32:57.804 Run status group 0 (all jobs): 00:32:57.804 READ: bw=38.3MiB/s (40.1MB/s), 38.3MiB/s-38.3MiB/s (40.1MB/s-40.1MB/s), io=78.3MiB (82.1MB), run=2046-2046msec 00:32:57.804 WRITE: bw=38.3MiB/s (40.2MB/s), 38.3MiB/s-38.3MiB/s (40.2MB/s-40.2MB/s), io=78.4MiB (82.2MB), run=2046-2046msec 00:32:58.061 ----------------------------------------------------- 00:32:58.061 Suppressions used: 00:32:58.061 count bytes template 00:32:58.061 1 57 /usr/src/fio/parse.c 00:32:58.061 1 8 libtcmalloc_minimal.so 00:32:58.061 ----------------------------------------------------- 00:32:58.061 00:32:58.062 10:34:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:58.062 10:34:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:58.062 10:34:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:58.062 10:34:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:58.062 10:34:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:58.062 10:34:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:58.062 10:34:51 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:58.062 10:34:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:58.062 10:34:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:58.062 10:34:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:58.062 10:34:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:58.062 10:34:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:58.347 10:34:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:58.347 10:34:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:58.347 10:34:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:58.347 10:34:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:58.347 10:34:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:58.611 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:58.611 fio-3.35 00:32:58.611 Starting 1 thread 00:32:59.541 [2024-12-13 10:34:53.133782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:59.541 [2024-12-13 10:34:53.133857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:32:59.541 [2024-12-13 10:34:53.133869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:00.911 00:33:00.911 test: (groupid=0, jobs=1): err= 0: pid=4084761: Fri Dec 13 10:34:54 2024 00:33:00.911 read: IOPS=9512, BW=149MiB/s (156MB/s)(298MiB/2007msec) 00:33:00.911 slat (nsec): min=2609, max=92479, avg=3222.18, stdev=1557.09 00:33:00.911 clat (usec): min=2285, max=13980, avg=7690.90, stdev=1774.03 00:33:00.911 lat (usec): min=2288, max=13983, avg=7694.12, stdev=1774.13 00:33:00.911 clat percentiles (usec): 00:33:00.911 | 1.00th=[ 4080], 5.00th=[ 4948], 10.00th=[ 5473], 20.00th=[ 6259], 00:33:00.911 | 30.00th=[ 6718], 40.00th=[ 7177], 50.00th=[ 7570], 60.00th=[ 8029], 00:33:00.911 | 70.00th=[ 8455], 80.00th=[ 8979], 90.00th=[ 9896], 95.00th=[10814], 00:33:00.911 | 99.00th=[12649], 99.50th=[13304], 99.90th=[13698], 99.95th=[13829], 00:33:00.911 | 99.99th=[13960] 00:33:00.911 bw ( KiB/s): min=72640, max=82304, per=50.12%, avg=76288.00, stdev=4243.67, samples=4 00:33:00.911 iops : min= 4540, max= 5144, avg=4768.00, stdev=265.23, samples=4 00:33:00.911 write: IOPS=5507, BW=86.0MiB/s (90.2MB/s)(156MiB/1812msec); 0 zone resets 00:33:00.911 slat (usec): min=27, max=353, avg=32.67, stdev= 6.83 00:33:00.911 clat (usec): min=4383, max=17904, avg=10040.49, stdev=1631.29 00:33:00.911 lat (usec): min=4414, max=17935, avg=10073.16, stdev=1631.91 00:33:00.911 clat percentiles (usec): 00:33:00.911 | 1.00th=[ 6849], 
5.00th=[ 7570], 10.00th=[ 8094], 20.00th=[ 8586], 00:33:00.911 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10421], 00:33:00.911 | 70.00th=[10814], 80.00th=[11338], 90.00th=[12125], 95.00th=[12780], 00:33:00.911 | 99.00th=[13960], 99.50th=[15008], 99.90th=[17171], 99.95th=[17695], 00:33:00.911 | 99.99th=[17957] 00:33:00.911 bw ( KiB/s): min=75552, max=86016, per=90.03%, avg=79328.00, stdev=4631.72, samples=4 00:33:00.911 iops : min= 4722, max= 5376, avg=4958.00, stdev=289.48, samples=4 00:33:00.911 lat (msec) : 4=0.56%, 10=76.51%, 20=22.93% 00:33:00.911 cpu : usr=85.10%, sys=12.81%, ctx=130, majf=0, minf=2402 00:33:00.911 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:33:00.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:00.911 issued rwts: total=19092,9979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:00.911 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:00.911 00:33:00.911 Run status group 0 (all jobs): 00:33:00.911 READ: bw=149MiB/s (156MB/s), 149MiB/s-149MiB/s (156MB/s-156MB/s), io=298MiB (313MB), run=2007-2007msec 00:33:00.911 WRITE: bw=86.0MiB/s (90.2MB/s), 86.0MiB/s-86.0MiB/s (90.2MB/s-90.2MB/s), io=156MiB (163MB), run=1812-1812msec 00:33:01.168 ----------------------------------------------------- 00:33:01.168 Suppressions used: 00:33:01.168 count bytes template 00:33:01.168 1 57 /usr/src/fio/parse.c 00:33:01.168 473 45408 /usr/src/fio/iolog.c 00:33:01.168 1 8 libtcmalloc_minimal.so 00:33:01.168 ----------------------------------------------------- 00:33:01.168 00:33:01.168 10:34:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:01.426 10:34:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:33:01.426 10:34:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:33:01.426 10:34:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:33:01.426 10:34:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:01.426 10:34:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:33:01.426 10:34:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:01.426 10:34:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:01.426 10:34:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:01.426 10:34:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:01.426 10:34:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:33:01.426 10:34:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:33:04.696 Nvme0n1 00:33:04.697 10:34:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:33:07.217 10:35:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 
-- # ls_guid=2f9264ef-d455-4f7d-91d5-40b1bb55bbc6 00:33:07.217 10:35:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 2f9264ef-d455-4f7d-91d5-40b1bb55bbc6 00:33:07.217 10:35:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=2f9264ef-d455-4f7d-91d5-40b1bb55bbc6 00:33:07.217 10:35:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:33:07.217 10:35:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:33:07.217 10:35:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:33:07.218 10:35:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:07.474 10:35:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:33:07.474 { 00:33:07.474 "uuid": "2f9264ef-d455-4f7d-91d5-40b1bb55bbc6", 00:33:07.474 "name": "lvs_0", 00:33:07.474 "base_bdev": "Nvme0n1", 00:33:07.474 "total_data_clusters": 930, 00:33:07.474 "free_clusters": 930, 00:33:07.474 "block_size": 512, 00:33:07.474 "cluster_size": 1073741824 00:33:07.474 } 00:33:07.474 ]' 00:33:07.475 10:35:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="2f9264ef-d455-4f7d-91d5-40b1bb55bbc6") .free_clusters' 00:33:07.475 10:35:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:33:07.475 10:35:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="2f9264ef-d455-4f7d-91d5-40b1bb55bbc6") .cluster_size' 00:33:07.475 10:35:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:33:07.475 10:35:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:33:07.475 10:35:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:33:07.475 952320 00:33:07.475 10:35:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:33:08.038 fe9272fd-dc5c-4c89-9b87-10359a2c33a4 00:33:08.038 10:35:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:08.038 10:35:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:33:08.295 10:35:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:08.552 10:35:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:08.552 10:35:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:08.552 10:35:02 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:08.552 10:35:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:08.552 10:35:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:08.552 10:35:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:08.552 10:35:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:08.552 10:35:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:08.552 10:35:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:08.552 10:35:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:08.552 10:35:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:08.552 10:35:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:08.552 10:35:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:08.552 10:35:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:08.552 10:35:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:08.552 10:35:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:08.552 10:35:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:08.809 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:08.809 fio-3.35 00:33:08.809 Starting 1 thread 00:33:11.331 00:33:11.331 test: (groupid=0, jobs=1): err= 0: pid=4086458: Fri Dec 13 10:35:04 2024 00:33:11.331 read: IOPS=6900, BW=27.0MiB/s (28.3MB/s)(54.1MiB/2007msec) 00:33:11.331 slat (nsec): min=1658, max=120778, avg=2126.30, stdev=1558.23 00:33:11.331 clat (usec): min=688, max=170749, avg=10161.42, stdev=11001.31 00:33:11.331 lat (usec): min=691, max=170778, avg=10163.54, stdev=11001.53 00:33:11.331 clat percentiles (msec): 00:33:11.332 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 9], 00:33:11.332 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 10], 00:33:11.332 | 70.00th=[ 10], 80.00th=[ 11], 90.00th=[ 11], 95.00th=[ 11], 00:33:11.332 | 99.00th=[ 12], 99.50th=[ 16], 99.90th=[ 171], 99.95th=[ 171], 00:33:11.332 | 99.99th=[ 171] 00:33:11.332 bw ( KiB/s): min=19488, max=30384, per=99.88%, avg=27570.00, stdev=5388.84, samples=4 00:33:11.332 iops : min= 4872, max= 7596, avg=6892.50, stdev=1347.21, samples=4 00:33:11.332 write: IOPS=6905, BW=27.0MiB/s (28.3MB/s)(54.1MiB/2007msec); 0 zone resets 00:33:11.332 slat (nsec): min=1714, max=105145, avg=2148.96, stdev=1114.64 00:33:11.332 clat (usec): min=207, max=169031, avg=8261.12, stdev=10284.98 00:33:11.332 lat (usec): min=210, max=169037, avg=8263.27, stdev=10285.25 00:33:11.332 clat percentiles (msec): 00:33:11.332 | 
1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:33:11.332 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 8], 00:33:11.332 | 70.00th=[ 8], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 9], 00:33:11.332 | 99.00th=[ 10], 99.50th=[ 14], 99.90th=[ 169], 99.95th=[ 169], 00:33:11.332 | 99.99th=[ 169] 00:33:11.332 bw ( KiB/s): min=20392, max=30216, per=99.90%, avg=27596.00, stdev=4805.16, samples=4 00:33:11.332 iops : min= 5098, max= 7554, avg=6899.00, stdev=1201.29, samples=4 00:33:11.332 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:33:11.332 lat (msec) : 2=0.02%, 4=0.22%, 10=88.25%, 20=11.02%, 250=0.46% 00:33:11.332 cpu : usr=75.02%, sys=24.13%, ctx=33, majf=0, minf=1505 00:33:11.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:11.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:11.332 issued rwts: total=13850,13860,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:11.332 00:33:11.332 Run status group 0 (all jobs): 00:33:11.332 READ: bw=27.0MiB/s (28.3MB/s), 27.0MiB/s-27.0MiB/s (28.3MB/s-28.3MB/s), io=54.1MiB (56.7MB), run=2007-2007msec 00:33:11.332 WRITE: bw=27.0MiB/s (28.3MB/s), 27.0MiB/s-27.0MiB/s (28.3MB/s-28.3MB/s), io=54.1MiB (56.8MB), run=2007-2007msec 00:33:11.332 ----------------------------------------------------- 00:33:11.332 Suppressions used: 00:33:11.332 count bytes template 00:33:11.332 1 58 /usr/src/fio/parse.c 00:33:11.332 1 8 libtcmalloc_minimal.so 00:33:11.332 ----------------------------------------------------- 00:33:11.332 00:33:11.588 10:35:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:11.588 10:35:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:33:12.956 10:35:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=2bc096bb-d8e6-40a8-a669-1a2775f61536 00:33:12.956 10:35:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 2bc096bb-d8e6-40a8-a669-1a2775f61536 00:33:12.956 10:35:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=2bc096bb-d8e6-40a8-a669-1a2775f61536 00:33:12.956 10:35:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:33:12.956 10:35:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:33:12.956 10:35:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:33:12.956 10:35:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:12.956 10:35:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:33:12.956 { 00:33:12.956 "uuid": "2f9264ef-d455-4f7d-91d5-40b1bb55bbc6", 00:33:12.956 "name": "lvs_0", 00:33:12.956 "base_bdev": "Nvme0n1", 00:33:12.956 "total_data_clusters": 930, 00:33:12.956 "free_clusters": 0, 00:33:12.956 "block_size": 512, 00:33:12.956 "cluster_size": 1073741824 00:33:12.956 }, 00:33:12.956 { 00:33:12.956 "uuid": "2bc096bb-d8e6-40a8-a669-1a2775f61536", 00:33:12.956 "name": "lvs_n_0", 00:33:12.956 "base_bdev": 
"fe9272fd-dc5c-4c89-9b87-10359a2c33a4", 00:33:12.956 "total_data_clusters": 237847, 00:33:12.956 "free_clusters": 237847, 00:33:12.956 "block_size": 512, 00:33:12.956 "cluster_size": 4194304 00:33:12.956 } 00:33:12.956 ]' 00:33:12.956 10:35:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="2bc096bb-d8e6-40a8-a669-1a2775f61536") .free_clusters' 00:33:12.956 10:35:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:33:12.956 10:35:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="2bc096bb-d8e6-40a8-a669-1a2775f61536") .cluster_size' 00:33:13.212 10:35:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:33:13.212 10:35:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:33:13.212 10:35:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:33:13.212 951388 00:33:13.213 10:35:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:33:14.142 7c9dc3b5-0491-4c00-b925-97b024600d2a 00:33:14.142 10:35:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:14.142 10:35:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:14.399 10:35:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:33:14.656 10:35:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:14.656 10:35:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:14.656 10:35:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:14.656 10:35:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:14.656 10:35:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:14.656 10:35:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:14.656 10:35:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:14.656 10:35:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:14.656 10:35:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:14.656 10:35:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
00:33:14.656 10:35:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:14.656 10:35:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:14.656 10:35:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:14.656 10:35:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:14.656 10:35:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:14.656 10:35:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:14.656 10:35:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:14.913 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:14.913 fio-3.35 00:33:14.913 Starting 1 thread 00:33:17.434 00:33:17.434 test: (groupid=0, jobs=1): err= 0: pid=4087656: Fri Dec 13 10:35:11 2024 00:33:17.434 read: IOPS=6689, BW=26.1MiB/s (27.4MB/s)(52.5MiB/2008msec) 00:33:17.435 slat (nsec): min=1661, max=117540, avg=2009.22, stdev=1504.53 00:33:17.435 clat (usec): min=3652, max=16896, avg=10489.69, stdev=962.22 00:33:17.435 lat (usec): min=3656, max=16898, avg=10491.70, stdev=962.09 00:33:17.435 clat percentiles (usec): 00:33:17.435 | 1.00th=[ 8356], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9765], 00:33:17.435 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:33:17.435 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 00:33:17.435 | 99.00th=[12911], 99.50th=[13435], 99.90th=[15139], 99.95th=[15533], 00:33:17.435 | 99.99th=[16909] 00:33:17.435 bw ( KiB/s): min=25872, max=27104, per=99.86%, avg=26722.00, stdev=572.07, samples=4 00:33:17.435 iops : min= 6468, max= 6776, avg=6680.50, stdev=143.02, samples=4 00:33:17.435 write: IOPS=6693, BW=26.1MiB/s (27.4MB/s)(52.5MiB/2008msec); 0 zone resets 00:33:17.435 slat (nsec): min=1720, max=104246, avg=2079.06, stdev=1151.28 00:33:17.435 clat (usec): min=1657, max=15444, avg=8524.02, stdev=800.04 00:33:17.435 lat (usec): min=1662, max=15445, avg=8526.10, stdev=799.96 00:33:17.435 clat percentiles (usec): 00:33:17.435 | 1.00th=[ 6652], 5.00th=[ 7373], 10.00th=[ 7635], 20.00th=[ 7963], 00:33:17.435 | 30.00th=[ 8160], 40.00th=[ 8356], 50.00th=[ 8455], 60.00th=[ 8717], 00:33:17.435 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9765], 00:33:17.435 | 99.00th=[10421], 99.50th=[10814], 99.90th=[13566], 99.95th=[15139], 00:33:17.435 | 99.99th=[15401] 00:33:17.435 bw ( KiB/s): min=26304, max=27136, per=99.99%, avg=26772.00, stdev=349.90, samples=4 00:33:17.435 iops : min= 6576, max= 6784, avg=6693.00, stdev=87.48, samples=4 00:33:17.435 lat (msec) : 2=0.01%, 4=0.10%, 10=63.12%, 20=36.77% 00:33:17.435 cpu : usr=74.74%, sys=24.31%, ctx=83, majf=0, minf=1505 00:33:17.435 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:17.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:17.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:17.435 issued rwts: total=13433,13441,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:17.435 
latency : target=0, window=0, percentile=100.00%, depth=128 00:33:17.435 00:33:17.435 Run status group 0 (all jobs): 00:33:17.435 READ: bw=26.1MiB/s (27.4MB/s), 26.1MiB/s-26.1MiB/s (27.4MB/s-27.4MB/s), io=52.5MiB (55.0MB), run=2008-2008msec 00:33:17.435 WRITE: bw=26.1MiB/s (27.4MB/s), 26.1MiB/s-26.1MiB/s (27.4MB/s-27.4MB/s), io=52.5MiB (55.1MB), run=2008-2008msec 00:33:17.728 ----------------------------------------------------- 00:33:17.728 Suppressions used: 00:33:17.728 count bytes template 00:33:17.728 1 58 /usr/src/fio/parse.c 00:33:17.728 1 8 libtcmalloc_minimal.so 00:33:17.728 ----------------------------------------------------- 00:33:17.728 00:33:17.728 10:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:18.007 10:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:33:18.007 10:35:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:22.268 10:35:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:22.268 10:35:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:25.542 10:35:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:25.542 10:35:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:27.439 10:35:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:27.439 10:35:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:27.439 10:35:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:27.439 10:35:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:27.439 10:35:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:33:27.439 10:35:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:27.439 10:35:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:33:27.439 10:35:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:27.439 10:35:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:27.439 rmmod nvme_tcp 00:33:27.439 rmmod nvme_fabrics 00:33:27.439 rmmod nvme_keyring 00:33:27.439 10:35:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:27.439 10:35:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:33:27.439 10:35:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:33:27.439 10:35:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 4083622 ']' 00:33:27.439 10:35:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 4083622 00:33:27.439 10:35:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 4083622 ']' 00:33:27.439 10:35:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 4083622 00:33:27.439 10:35:21 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:33:27.439 10:35:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:27.439 10:35:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4083622 00:33:27.439 10:35:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:27.439 10:35:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:27.439 10:35:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4083622' 00:33:27.439 killing process with pid 4083622 00:33:27.439 10:35:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 4083622 00:33:27.439 10:35:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 4083622 00:33:28.813 10:35:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:28.813 10:35:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:28.813 10:35:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:28.813 10:35:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:33:28.813 10:35:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:28.813 10:35:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:33:28.813 10:35:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:33:28.813 10:35:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:28.813 10:35:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:28.813 10:35:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.813 10:35:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:28.813 10:35:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:30.717 10:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:30.717 00:33:30.717 real 0m44.104s 00:33:30.717 user 2m55.298s 00:33:30.717 sys 0m10.268s 00:33:30.717 10:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:30.717 10:35:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.717 ************************************ 00:33:30.717 END TEST nvmf_fio_host 00:33:30.717 ************************************ 00:33:30.717 10:35:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:30.717 10:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:30.717 10:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:30.717 10:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:30.717 ************************************ 00:33:30.717 START TEST nvmf_failover 00:33:30.717 ************************************ 00:33:30.717 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 
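For reference, the 951388 handed to bdev_lvol_create for lbd_nest_0 in the fio_host section above is just the lvstore numbers printed by bdev_lvol_get_lvstores converted to MiB. A minimal sketch of that get_lvs_free_mb arithmetic, using the values the trace reported for lvs_n_0 (illustrative only, not the helper verbatim):

  fc=237847                            # free_clusters reported for lvs_n_0
  cs=4194304                           # cluster_size in bytes (4 MiB)
  free_mb=$(( fc * cs / 1024 / 1024 ))
  echo "$free_mb"                      # 951388, the size passed to 'bdev_lvol_create -l lvs_n_0 lbd_nest_0'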
00:33:30.976 * Looking for test storage... 00:33:30.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:30.976 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:30.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.977 --rc genhtml_branch_coverage=1 00:33:30.977 --rc genhtml_function_coverage=1 00:33:30.977 --rc genhtml_legend=1 00:33:30.977 --rc geninfo_all_blocks=1 00:33:30.977 --rc geninfo_unexecuted_blocks=1 00:33:30.977 00:33:30.977 ' 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:30.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.977 --rc genhtml_branch_coverage=1 00:33:30.977 --rc genhtml_function_coverage=1 00:33:30.977 --rc genhtml_legend=1 00:33:30.977 --rc geninfo_all_blocks=1 00:33:30.977 --rc geninfo_unexecuted_blocks=1 00:33:30.977 00:33:30.977 ' 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:30.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.977 --rc genhtml_branch_coverage=1 00:33:30.977 --rc genhtml_function_coverage=1 00:33:30.977 --rc genhtml_legend=1 00:33:30.977 --rc geninfo_all_blocks=1 00:33:30.977 --rc geninfo_unexecuted_blocks=1 00:33:30.977 00:33:30.977 ' 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:30.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.977 --rc genhtml_branch_coverage=1 00:33:30.977 --rc genhtml_function_coverage=1 00:33:30.977 --rc genhtml_legend=1 00:33:30.977 --rc geninfo_all_blocks=1 00:33:30.977 --rc geninfo_unexecuted_blocks=1 00:33:30.977 00:33:30.977 ' 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:30.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
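Two RPC endpoints are set up here for the failover run: rpc_py talks to the nvmf target over the default /var/tmp/spdk.sock, while bdevperf_rpc_sock (assigned on the next traced line) points at the separate socket bdevperf will listen on, so the initiator side can be driven independently of the target. A sketch of how the two are used later in this log (same commands as the trace):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock

  # Target-side RPC: default socket, no -s flag needed.
  $rpc_py nvmf_create_transport -t tcp -o -u 8192

  # Initiator-side RPC: address bdevperf through its own UNIX socket.
  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover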
00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:33:30.977 10:35:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:36.248 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:36.248 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:36.248 Found net devices under 0000:af:00.0: cvl_0_0 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:36.248 Found net devices under 0000:af:00.1: cvl_0_1 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
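The nvmf_tcp_init steps traced next split the two e810 ports found above into a target side and an initiator side: cvl_0_0 is moved into its own network namespace and given 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed into plain commands, the traced calls below amount to roughly this (a sketch, same names and addresses as the log):

  ip netns add cvl_0_0_ns_spdk                       # namespace for the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator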
00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:36.248 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:36.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:36.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:33:36.507 00:33:36.507 --- 10.0.0.2 ping statistics --- 00:33:36.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.507 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:36.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:36.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:33:36.507 00:33:36.507 --- 10.0.0.1 ping statistics --- 00:33:36.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.507 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=4093098 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 4093098 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 4093098 ']' 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:36.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:36.507 10:35:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:36.769 [2024-12-13 10:35:30.406114] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:33:36.769 [2024-12-13 10:35:30.406201] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:36.769 [2024-12-13 10:35:30.527074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:36.769 [2024-12-13 10:35:30.635608] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:36.769 [2024-12-13 10:35:30.635653] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:36.769 [2024-12-13 10:35:30.635663] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:36.769 [2024-12-13 10:35:30.635673] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:36.769 [2024-12-13 10:35:30.635680] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:36.769 [2024-12-13 10:35:30.637921] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:33:36.769 [2024-12-13 10:35:30.637987] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.769 [2024-12-13 10:35:30.637996] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:33:37.335 10:35:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:37.335 10:35:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:37.335 10:35:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:37.335 10:35:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:37.335 10:35:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:37.593 10:35:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:37.593 10:35:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:37.593 [2024-12-13 10:35:31.416211] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:37.593 10:35:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:37.851 Malloc0 00:33:38.108 10:35:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:38.108 10:35:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:38.366 10:35:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:38.624 [2024-12-13 10:35:32.285181] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:38.624 10:35:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:38.624 [2024-12-13 10:35:32.469753] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:38.624 10:35:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:38.882 [2024-12-13 10:35:32.662426] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:33:38.882 10:35:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:38.882 10:35:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=4093405 00:33:38.882 10:35:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:38.882 10:35:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 4093405 /var/tmp/bdevperf.sock 00:33:38.882 10:35:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 4093405 ']' 00:33:38.882 10:35:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:38.882 10:35:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:38.882 10:35:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:38.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:38.882 10:35:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:38.882 10:35:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:39.818 10:35:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:39.818 10:35:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:39.818 10:35:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:40.077 NVMe0n1 00:33:40.336 10:35:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:40.336 00:33:40.594 10:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:40.594 10:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=4093637 00:33:40.594 10:35:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:33:41.530 10:35:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:41.530 [2024-12-13 10:35:35.416383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:41.530 [2024-12-13 10:35:35.416437] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:41.530 [2024-12-13 10:35:35.416459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 
00:33:41.530 [2024-12-13 10:35:35.416469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:41.530 [2024-12-13 10:35:35.416478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:41.530 [2024-12-13 10:35:35.416487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:41.530 [2024-12-13 10:35:35.416495] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:41.530 [2024-12-13 10:35:35.416503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:41.530 [2024-12-13 10:35:35.416511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:41.530 [2024-12-13 10:35:35.416524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:41.530 [2024-12-13 10:35:35.416532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:41.530 [2024-12-13 10:35:35.416540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:41.531 [2024-12-13 10:35:35.416548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:41.531 [2024-12-13 10:35:35.416556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:41.531 [2024-12-13 10:35:35.416564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:41.531 [2024-12-13 10:35:35.416572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:41.531 [2024-12-13 10:35:35.416580] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:41.531 [2024-12-13 10:35:35.416588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:41.531 [2024-12-13 10:35:35.416597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:41.531 [2024-12-13 10:35:35.416604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:41.531 [2024-12-13 10:35:35.416612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:41.531 [2024-12-13 10:35:35.416620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:41.531 [2024-12-13 10:35:35.416628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:33:41.789 10:35:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:45.077 10:35:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:45.077 00:33:45.077 10:35:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:45.077 [2024-12-13 10:35:38.947384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:45.077 [2024-12-13 10:35:38.947433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:45.077 [2024-12-13 10:35:38.947443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:45.077 [2024-12-13 10:35:38.947460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:45.077 [2024-12-13 10:35:38.947469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:45.077 [2024-12-13 10:35:38.947477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:45.077 [2024-12-13 10:35:38.947486] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:45.077 [2024-12-13 10:35:38.947495] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:45.077 [2024-12-13 10:35:38.947503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:45.336 10:35:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:48.624 10:35:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:48.624 [2024-12-13 10:35:42.165399] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:48.624 10:35:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:49.562 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:49.562 [2024-12-13 10:35:43.381090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:49.562 [2024-12-13 10:35:43.381136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:49.562 [2024-12-13 10:35:43.381146] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:49.562 [2024-12-13 10:35:43.381155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:49.562 [2024-12-13 10:35:43.381163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000004480 is same with the state(6) to be set 00:33:49.562 [2024-12-13 10:35:43.381171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:49.562 [2024-12-13 10:35:43.381179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:33:49.562 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 4093637 00:33:56.130 { 00:33:56.130 "results": [ 00:33:56.130 { 00:33:56.130 "job": "NVMe0n1", 00:33:56.130 "core_mask": "0x1", 00:33:56.130 "workload": "verify", 00:33:56.130 "status": "finished", 00:33:56.130 "verify_range": { 00:33:56.130 "start": 0, 00:33:56.130 "length": 16384 00:33:56.130 }, 00:33:56.130 "queue_depth": 128, 00:33:56.130 "io_size": 4096, 00:33:56.130 "runtime": 15.011374, 00:33:56.130 "iops": 9703.841900148514, 00:33:56.130 "mibps": 37.905632422455135, 00:33:56.130 "io_failed": 4309, 00:33:56.130 "io_timeout": 0, 00:33:56.130 "avg_latency_us": 12786.821394340783, 00:33:56.130 "min_latency_us": 468.1142857142857, 00:33:56.130 "max_latency_us": 12732.708571428571 00:33:56.130 } 00:33:56.130 ], 00:33:56.130 "core_count": 1 00:33:56.130 } 00:33:56.130 10:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 4093405 00:33:56.130 10:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 4093405 ']' 00:33:56.130 10:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 4093405 00:33:56.130 10:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:56.130 10:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:56.130 10:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4093405 00:33:56.130 10:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:56.130 10:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:56.130 10:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4093405' 00:33:56.130 killing process with pid 4093405 00:33:56.130 10:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 4093405 00:33:56.130 10:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 4093405 00:33:56.704 10:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:56.704 [2024-12-13 10:35:32.747078] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:33:56.704 [2024-12-13 10:35:32.747187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4093405 ] 00:33:56.704 [2024-12-13 10:35:32.860161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.704 [2024-12-13 10:35:32.974063] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.704 Running I/O for 15 seconds... 
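For context on the block above: host/failover.sh drives a planned path failover while bdevperf keeps a 15-second verify workload running. It attaches NVMe0 over a second TCP path with -x failover, removes the 4421 listener, re-adds 4420, then removes 4422, and the JSON block is bdevperf's result for the run (roughly 9.7k IOPS with 4309 I/O failures reported across the path switches). The sketch below simply restates that listener flip as a standalone sequence and pulls the headline numbers out of the result JSON; the rpc.py path, socket, NQN and addresses are copied from the log lines above, while results.json and the use of jq are assumptions added here for illustration only, not part of the test script.

# Sketch only, not part of the test: replay the listener flip exercised by host/failover.sh
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN -x failover
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421   # drop the path bdevperf is currently using
sleep 3
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420      # bring the original port back
sleep 1
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422   # force a switch back to 4420
# Summarise the bdevperf result (results.json is a hypothetical capture of the JSON printed above)
jq '.results[0] | {iops, io_failed, avg_latency_us}' results.json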
00:33:56.704 9705.00 IOPS, 37.91 MiB/s [2024-12-13T09:35:50.595Z] [2024-12-13 10:35:35.417472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.704 [2024-12-13 10:35:35.417521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.704 [2024-12-13 10:35:35.417542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.704 [2024-12-13 10:35:35.417553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.704 [2024-12-13 10:35:35.417564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.704 [2024-12-13 10:35:35.417574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.704 [2024-12-13 10:35:35.417584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.704 [2024-12-13 10:35:35.417593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.704 [2024-12-13 10:35:35.417603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325580 is same with the state(6) to be set 00:33:56.704 [2024-12-13 10:35:35.418353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.704 [2024-12-13 10:35:35.418381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.704 [2024-12-13 10:35:35.418402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.704 [2024-12-13 10:35:35.418414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.704 [2024-12-13 10:35:35.418428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.704 [2024-12-13 10:35:35.418440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.704 [2024-12-13 10:35:35.418460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.704 [2024-12-13 10:35:35.418470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.704 [2024-12-13 10:35:35.418482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.704 [2024-12-13 10:35:35.418493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:56.705 [2024-12-13 10:35:35.418516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.418544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:85232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.418565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.418595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.418616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:85256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.418636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.418657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:85272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.418680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:85280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.418701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.418722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.418744] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.418765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.418786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.418807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.418830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.418852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.418873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.418893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.418914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.418934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.418955] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.418976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.418987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.418996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.419007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.419016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.419027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.419036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.419047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.419057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.419068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.419079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.419090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.419099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.419110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.419120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.419131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:85448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.419140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.419151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.419159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.419171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.419180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.419191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.419200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.419211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.419222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.419234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.705 [2024-12-13 10:35:35.419243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.419254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.705 [2024-12-13 10:35:35.419264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.419275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.705 [2024-12-13 10:35:35.419285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.419296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.705 [2024-12-13 10:35:35.419305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.705 [2024-12-13 10:35:35.419316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.705 [2024-12-13 10:35:35.419325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.706 [2024-12-13 10:35:35.419351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.706 [2024-12-13 10:35:35.419372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 
[2024-12-13 10:35:35.419383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.706 [2024-12-13 10:35:35.419393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:104 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.419985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.419997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.420007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.420018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85728 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.420027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.420039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.420049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.420060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.420069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.420080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.420089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.420101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.420111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.420121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.420130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.420143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.706 [2024-12-13 10:35:35.420153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.706 [2024-12-13 10:35:35.420164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.707 [2024-12-13 10:35:35.420173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.707 [2024-12-13 10:35:35.420193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:85800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.707 [2024-12-13 10:35:35.420214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.707 [2024-12-13 
10:35:35.420234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.707 [2024-12-13 10:35:35.420254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420440] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:85064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:85080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.707 [2024-12-13 10:35:35.420772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.707 [2024-12-13 10:35:35.420793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.707 [2024-12-13 10:35:35.420813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.707 [2024-12-13 10:35:35.420832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.707 [2024-12-13 10:35:35.420853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.707 [2024-12-13 10:35:35.420873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.707 [2024-12-13 10:35:35.420892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.707 [2024-12-13 10:35:35.420912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:85120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:85128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.420986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:85136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.707 [2024-12-13 10:35:35.420995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.707 [2024-12-13 10:35:35.421007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:35.421016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:35.421028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:35.421037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:35.421048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:85160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:35.421057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 
[2024-12-13 10:35:35.421083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:56.708 [2024-12-13 10:35:35.421093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:56.708 [2024-12-13 10:35:35.421103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85168 len:8 PRP1 0x0 PRP2 0x0 00:33:56.708 [2024-12-13 10:35:35.421117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:35.421431] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:56.708 [2024-12-13 10:35:35.421457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:33:56.708 [2024-12-13 10:35:35.424524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:56.708 [2024-12-13 10:35:35.424569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325580 (9): Bad file descriptor 00:33:56.708 [2024-12-13 10:35:35.451363] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:33:56.708 9649.50 IOPS, 37.69 MiB/s [2024-12-13T09:35:50.599Z] 9678.33 IOPS, 37.81 MiB/s [2024-12-13T09:35:50.599Z] 9742.25 IOPS, 38.06 MiB/s [2024-12-13T09:35:50.599Z] [2024-12-13 10:35:38.947814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:106360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.947862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.947886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:106368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.947911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.947923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.947933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.947944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:106384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.947953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.947965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.947975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.947986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:106400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.947996] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:106408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:106416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:106424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:106616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.708 [2024-12-13 10:35:38.948079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.708 [2024-12-13 10:35:38.948100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:106448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:106464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:106472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:106480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:106504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:106576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:106584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.708 [2024-12-13 10:35:38.948565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.708 [2024-12-13 10:35:38.948576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:106608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.709 [2024-12-13 10:35:38.948586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.948598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:106632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.948608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.948619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:106640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.948628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:56.709 [2024-12-13 10:35:38.948639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:106648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.948650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.948662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:106656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.948672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.948683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:106664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.948694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.948708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:106672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.948718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.948730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:106680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.948740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.948751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:106688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.948761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.948772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:106696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.948782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.948793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:106704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.948803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.948814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:106712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.948823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.948833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:106720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.948842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.948853] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:106728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.948862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.948873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:106736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.948883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.948894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:106744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.948903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.948914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:106752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.948924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.948935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:106760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.948945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.948956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:106768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.948965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.948978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.948989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.949000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:106784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.949009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.949020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:106792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.949029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.949040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:106800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.949050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.949061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:106808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.949070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.949080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:106816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.949089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.949101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.949110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.949121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:106832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.949131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.949142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:106840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.949152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.949162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:106848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.949172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.949183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:106856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.949192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.949203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:106864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.949212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.949223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:106872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.709 [2024-12-13 10:35:38.949234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.709 [2024-12-13 10:35:38.949245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:106880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:35 nsid:1 lba:106888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:106896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:106904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:106912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:106920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:106928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:106936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:106944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:106952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:106960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:106968 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:106976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:106984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:106992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:107008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:107016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:107032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:107048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:56.710 [2024-12-13 10:35:38.949712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:107080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:107088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:107104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:107112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:107128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949925] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:107136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:107144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:107152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.949988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.949999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.950008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.950019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:107168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.950028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.950041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:107176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.950051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.950062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.950071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.950083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:107192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.950093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.950104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:107200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.710 [2024-12-13 10:35:38.950113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.710 [2024-12-13 10:35:38.950123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.711 [2024-12-13 10:35:38.950133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.950144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:107216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.711 [2024-12-13 10:35:38.950153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.950164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:107224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.711 [2024-12-13 10:35:38.950173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.950183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.711 [2024-12-13 10:35:38.950193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.950204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:107240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.711 [2024-12-13 10:35:38.950212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.950223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.711 [2024-12-13 10:35:38.950232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.950244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:107256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.711 [2024-12-13 10:35:38.950255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.950266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:107264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.711 [2024-12-13 10:35:38.950275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.950317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:56.711 [2024-12-13 10:35:38.950330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107272 len:8 PRP1 0x0 PRP2 0x0 00:33:56.711 [2024-12-13 10:35:38.950343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.950358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:56.711 [2024-12-13 10:35:38.950366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:56.711 [2024-12-13 10:35:38.950378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107280 len:8 PRP1 0x0 PRP2 0x0 00:33:56.711 [2024-12-13 10:35:38.950388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.950398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:56.711 [2024-12-13 10:35:38.950405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:56.711 [2024-12-13 10:35:38.950413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107288 len:8 PRP1 0x0 PRP2 0x0 00:33:56.711 [2024-12-13 10:35:38.950422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.950431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:56.711 [2024-12-13 10:35:38.950438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:56.711 [2024-12-13 10:35:38.950446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107296 len:8 PRP1 0x0 PRP2 0x0 00:33:56.711 [2024-12-13 10:35:38.950461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.950470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:56.711 [2024-12-13 10:35:38.950477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:56.711 [2024-12-13 10:35:38.950485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107304 len:8 PRP1 0x0 PRP2 0x0 00:33:56.711 [2024-12-13 10:35:38.950495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.950504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:56.711 [2024-12-13 10:35:38.950511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:56.711 [2024-12-13 10:35:38.950520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107312 len:8 PRP1 0x0 PRP2 0x0 00:33:56.711 [2024-12-13 10:35:38.950528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.950537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:56.711 [2024-12-13 10:35:38.950544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:56.711 [2024-12-13 10:35:38.950552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107320 len:8 PRP1 0x0 PRP2 0x0 00:33:56.711 [2024-12-13 10:35:38.950561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.950570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:56.711 [2024-12-13 10:35:38.950578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:56.711 [2024-12-13 10:35:38.950589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107328 len:8 PRP1 0x0 PRP2 0x0 00:33:56.711 [2024-12-13 10:35:38.950598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.950614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:56.711 [2024-12-13 10:35:38.950621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:56.711 [2024-12-13 10:35:38.950631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107336 len:8 PRP1 0x0 PRP2 0x0 00:33:56.711 [2024-12-13 10:35:38.950640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.950649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:56.711 [2024-12-13 10:35:38.950656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:56.711 [2024-12-13 10:35:38.950663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107344 len:8 PRP1 0x0 PRP2 0x0 00:33:56.711 [2024-12-13 10:35:38.950673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.950682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:56.711 [2024-12-13 10:35:38.950689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:56.711 [2024-12-13 10:35:38.950696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107352 len:8 PRP1 0x0 PRP2 0x0 00:33:56.711 [2024-12-13 10:35:38.950705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.950714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:56.711 [2024-12-13 10:35:38.950721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:56.711 [2024-12-13 10:35:38.950731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107360 len:8 PRP1 0x0 PRP2 0x0 00:33:56.711 [2024-12-13 10:35:38.950740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.950748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:56.711 [2024-12-13 10:35:38.950755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:56.711 [2024-12-13 10:35:38.950763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107368 len:8 PRP1 0x0 PRP2 0x0 00:33:56.711 [2024-12-13 10:35:38.950771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.950780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:56.711 [2024-12-13 10:35:38.950787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:56.711 [2024-12-13 10:35:38.950795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107376 len:8 PRP1 0x0 PRP2 0x0 00:33:56.711 [2024-12-13 10:35:38.950803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 
10:35:38.951095] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:33:56.711 [2024-12-13 10:35:38.951128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.711 [2024-12-13 10:35:38.951140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.951152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.711 [2024-12-13 10:35:38.951162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.951172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.711 [2024-12-13 10:35:38.951183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.951195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.711 [2024-12-13 10:35:38.951205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:38.951214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:33:56.711 [2024-12-13 10:35:38.951252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325580 (9): Bad file descriptor 00:33:56.711 [2024-12-13 10:35:38.954307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:33:56.711 [2024-12-13 10:35:38.978815] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:33:56.711 9661.20 IOPS, 37.74 MiB/s [2024-12-13T09:35:50.602Z] 9665.67 IOPS, 37.76 MiB/s [2024-12-13T09:35:50.602Z] 9681.00 IOPS, 37.82 MiB/s [2024-12-13T09:35:50.602Z] 9682.25 IOPS, 37.82 MiB/s [2024-12-13T09:35:50.602Z] 9696.78 IOPS, 37.88 MiB/s [2024-12-13T09:35:50.602Z] [2024-12-13 10:35:43.381602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.711 [2024-12-13 10:35:43.381647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:43.381662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.711 [2024-12-13 10:35:43.381672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.711 [2024-12-13 10:35:43.381682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.711 [2024-12-13 10:35:43.381692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.381703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.712 [2024-12-13 10:35:43.381711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.381720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325580 is same with the state(6) to be set 00:33:56.712 [2024-12-13 10:35:43.381782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.381796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.381822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.381832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.381844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.381854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.381865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.381876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.381887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.381897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.381912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.381921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.381932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.381942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.381953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.381963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.381974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.381984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.381995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:56.712 [2024-12-13 10:35:43.382118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:65056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382337] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:65088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.712 [2024-12-13 10:35:43.382552] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.712 [2024-12-13 10:35:43.382561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... the same READ/WRITE abort pair repeats for every remaining queued I/O on qid:1 (lba 65168 through 65896): each command is printed by nvme_io_qpair_print_command and completed with ABORTED - SQ DELETION (00/08) while the controller fails over ...] 00:33:56.715 [2024-12-13 10:35:43.384478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:10 nsid:1 lba:65304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.715 [2024-12-13 10:35:43.384487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.715 [2024-12-13 10:35:43.384512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:56.715 [2024-12-13 10:35:43.384521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:56.715 [2024-12-13 10:35:43.384531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65312 len:8 PRP1 0x0 PRP2 0x0 00:33:56.715 [2024-12-13 10:35:43.384541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.715 [2024-12-13 10:35:43.384860] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:33:56.715 [2024-12-13 10:35:43.384874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:33:56.715 [2024-12-13 10:35:43.387948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:33:56.715 [2024-12-13 10:35:43.387991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325580 (9): Bad file descriptor 00:33:56.715 [2024-12-13 10:35:43.424800] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:33:56.715 9664.50 IOPS, 37.75 MiB/s [2024-12-13T09:35:50.606Z] 9683.27 IOPS, 37.83 MiB/s [2024-12-13T09:35:50.606Z] 9680.42 IOPS, 37.81 MiB/s [2024-12-13T09:35:50.606Z] 9686.38 IOPS, 37.84 MiB/s [2024-12-13T09:35:50.606Z] 9706.21 IOPS, 37.91 MiB/s 00:33:56.715 Latency(us) 00:33:56.715 [2024-12-13T09:35:50.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.715 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:56.715 Verification LBA range: start 0x0 length 0x4000 00:33:56.715 NVMe0n1 : 15.01 9703.84 37.91 287.05 0.00 12786.82 468.11 12732.71 00:33:56.715 [2024-12-13T09:35:50.606Z] =================================================================================================================== 00:33:56.715 [2024-12-13T09:35:50.606Z] Total : 9703.84 37.91 287.05 0.00 12786.82 468.11 12732.71 00:33:56.715 Received shutdown signal, test time was about 15.000000 seconds 00:33:56.715 00:33:56.715 Latency(us) 00:33:56.715 [2024-12-13T09:35:50.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.715 [2024-12-13T09:35:50.606Z] =================================================================================================================== 00:33:56.715 [2024-12-13T09:35:50.606Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:56.715 10:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:56.715 10:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:33:56.715 10:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:33:56.715 10:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=4096303 00:33:56.715 10:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:33:56.715 10:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 4096303 /var/tmp/bdevperf.sock 00:33:56.715 10:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 4096303 ']' 00:33:56.715 10:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:56.715 10:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:56.715 10:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:56.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:56.715 10:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:56.715 10:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:57.657 10:35:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:57.657 10:35:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:57.657 10:35:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:57.657 [2024-12-13 10:35:51.440355] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:57.657 10:35:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:57.916 [2024-12-13 10:35:51.648962] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:57.916 10:35:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:58.175 NVMe0n1 00:33:58.175 10:35:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:58.434 00:33:58.434 10:35:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:59.002 00:33:59.002 10:35:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:59.002 10:35:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:59.002 10:35:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:59.261 10:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 
3 00:34:02.549 10:35:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:02.549 10:35:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:34:02.549 10:35:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=4097202 00:34:02.549 10:35:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:02.549 10:35:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 4097202 00:34:03.486 { 00:34:03.486 "results": [ 00:34:03.486 { 00:34:03.486 "job": "NVMe0n1", 00:34:03.486 "core_mask": "0x1", 00:34:03.486 "workload": "verify", 00:34:03.486 "status": "finished", 00:34:03.486 "verify_range": { 00:34:03.486 "start": 0, 00:34:03.486 "length": 16384 00:34:03.486 }, 00:34:03.486 "queue_depth": 128, 00:34:03.486 "io_size": 4096, 00:34:03.486 "runtime": 1.008084, 00:34:03.486 "iops": 9756.131433491653, 00:34:03.486 "mibps": 38.10988841207677, 00:34:03.486 "io_failed": 0, 00:34:03.486 "io_timeout": 0, 00:34:03.486 "avg_latency_us": 13063.63625806764, 00:34:03.486 "min_latency_us": 2715.062857142857, 00:34:03.486 "max_latency_us": 11421.988571428572 00:34:03.486 } 00:34:03.486 ], 00:34:03.486 "core_count": 1 00:34:03.486 } 00:34:03.745 10:35:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:03.745 [2024-12-13 10:35:50.445577] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:34:03.745 [2024-12-13 10:35:50.445688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4096303 ] 00:34:03.745 [2024-12-13 10:35:50.559838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:03.745 [2024-12-13 10:35:50.674465] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:03.745 [2024-12-13 10:35:53.034668] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:03.745 [2024-12-13 10:35:53.034740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.745 [2024-12-13 10:35:53.034759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.745 [2024-12-13 10:35:53.034774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.745 [2024-12-13 10:35:53.034784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.745 [2024-12-13 10:35:53.034795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.745 [2024-12-13 10:35:53.034806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.745 [2024-12-13 10:35:53.034816] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.745 [2024-12-13 10:35:53.034826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.745 [2024-12-13 10:35:53.034836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:34:03.745 [2024-12-13 10:35:53.034887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:34:03.745 [2024-12-13 10:35:53.034915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325580 (9): Bad file descriptor 00:34:03.745 [2024-12-13 10:35:53.043653] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:34:03.745 Running I/O for 1 seconds... 00:34:03.745 9707.00 IOPS, 37.92 MiB/s 00:34:03.745 Latency(us) 00:34:03.745 [2024-12-13T09:35:57.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:03.745 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:03.745 Verification LBA range: start 0x0 length 0x4000 00:34:03.745 NVMe0n1 : 1.01 9756.13 38.11 0.00 0.00 13063.64 2715.06 11421.99 00:34:03.745 [2024-12-13T09:35:57.636Z] =================================================================================================================== 00:34:03.745 [2024-12-13T09:35:57.636Z] Total : 9756.13 38.11 0.00 0.00 13063.64 2715.06 11421.99 00:34:03.745 10:35:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:03.745 10:35:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:34:03.745 10:35:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:04.004 10:35:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:04.004 10:35:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:34:04.262 10:35:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:04.521 10:35:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:34:07.808 10:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:07.808 10:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:34:07.808 10:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 4096303 00:34:07.808 10:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 4096303 ']' 00:34:07.808 10:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 4096303 00:34:07.808 10:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # 
uname 00:34:07.808 10:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:07.808 10:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4096303 00:34:07.808 10:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:07.808 10:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:07.808 10:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4096303' 00:34:07.808 killing process with pid 4096303 00:34:07.808 10:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 4096303 00:34:07.808 10:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 4096303 00:34:08.744 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:34:08.744 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:08.744 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:34:08.744 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:08.744 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:34:08.744 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:08.744 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:34:08.744 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:08.744 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:34:08.744 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:08.744 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:08.744 rmmod nvme_tcp 00:34:08.744 rmmod nvme_fabrics 00:34:08.744 rmmod nvme_keyring 00:34:08.744 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:08.744 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:34:08.744 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:34:08.744 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 4093098 ']' 00:34:08.744 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 4093098 00:34:08.744 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 4093098 ']' 00:34:08.744 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 4093098 00:34:08.744 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:34:08.744 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:08.744 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4093098 00:34:09.003 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:09.003 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:09.003 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 4093098' 00:34:09.003 killing process with pid 4093098 00:34:09.003 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 4093098 00:34:09.003 10:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 4093098 00:34:10.380 10:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:10.381 10:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:10.381 10:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:10.381 10:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:34:10.381 10:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:34:10.381 10:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:10.381 10:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:34:10.381 10:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:10.381 10:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:10.381 10:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.381 10:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:10.381 10:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.284 10:36:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:12.284 00:34:12.284 real 0m41.499s 00:34:12.284 user 2m13.560s 00:34:12.284 sys 0m7.849s 00:34:12.284 10:36:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:12.284 10:36:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:12.284 ************************************ 00:34:12.284 END TEST nvmf_failover 00:34:12.284 ************************************ 00:34:12.284 10:36:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:12.284 10:36:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:12.284 10:36:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:12.284 10:36:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.284 ************************************ 00:34:12.284 START TEST nvmf_host_discovery 00:34:12.284 ************************************ 00:34:12.284 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:12.543 * Looking for test storage... 
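The discovery suite that starts here talks to the NVMe-oF discovery service instead of driving plain I/O: the target exposes the well-known discovery subsystem nqn.2014-08.org.nvmexpress.discovery on DISCOVERY_PORT=8009, as set a few lines below, and the host asks that subsystem which I/O subsystems it may attach to. A minimal sketch of that flow outside the harness is shown next; the target address 10.0.0.2 is carried over from the failover test above and is an assumption here, as is having nvme-cli available on the host.
# target side: listen for discovery requests on the well-known port (sketch, assumed address)
scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
# host side: read the discovery log page and print the subsystems the target advertises
nvme discover -t tcp -a 10.0.0.2 -s 8009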
00:34:12.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:12.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.543 --rc genhtml_branch_coverage=1 00:34:12.543 --rc genhtml_function_coverage=1 00:34:12.543 --rc genhtml_legend=1 00:34:12.543 --rc geninfo_all_blocks=1 00:34:12.543 --rc geninfo_unexecuted_blocks=1 00:34:12.543 00:34:12.543 ' 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:12.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.543 --rc genhtml_branch_coverage=1 00:34:12.543 --rc genhtml_function_coverage=1 00:34:12.543 --rc genhtml_legend=1 00:34:12.543 --rc geninfo_all_blocks=1 00:34:12.543 --rc geninfo_unexecuted_blocks=1 00:34:12.543 00:34:12.543 ' 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:12.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.543 --rc genhtml_branch_coverage=1 00:34:12.543 --rc genhtml_function_coverage=1 00:34:12.543 --rc genhtml_legend=1 00:34:12.543 --rc geninfo_all_blocks=1 00:34:12.543 --rc geninfo_unexecuted_blocks=1 00:34:12.543 00:34:12.543 ' 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:12.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.543 --rc genhtml_branch_coverage=1 00:34:12.543 --rc genhtml_function_coverage=1 00:34:12.543 --rc genhtml_legend=1 00:34:12.543 --rc geninfo_all_blocks=1 00:34:12.543 --rc geninfo_unexecuted_blocks=1 00:34:12.543 00:34:12.543 ' 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:34:12.543 10:36:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:12.543 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:12.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:34:12.544 10:36:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:17.965 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:17.965 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:17.965 10:36:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:17.965 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:17.966 Found net devices under 0000:af:00.0: cvl_0_0 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:17.966 Found net devices under 0000:af:00.1: cvl_0_1 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:17.966 
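The pci_bus_cache lookups and sysfs globs in this span are how the harness maps the two detected e810 functions (0000:af:00.0 and 0000:af:00.1, device 0x159b) to the kernel interface names it then uses (cvl_0_0 and cvl_0_1). A reduced sketch of just that mapping, assuming the PCI addresses are passed in by the caller:

  # Print the network interface(s) the kernel created for each PCI function given on the
  # command line. Mirrors the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob above.
  for pci in "$@"; do
      shopt -s nullglob
      net_ifaces=( "/sys/bus/pci/devices/$pci/net/"* )
      shopt -u nullglob
      if (( ${#net_ifaces[@]} == 0 )); then
          echo "no net device bound under $pci (driver not loaded or device unbound)" >&2
          continue
      fi
      echo "Found net devices under $pci: ${net_ifaces[*]##*/}"
  done

Invoked with 0000:af:00.0 0000:af:00.1 this would print the same "Found net devices under ...: cvl_0_*" lines that appear in the log.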
10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:17.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:17.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:34:17.966 00:34:17.966 --- 10.0.0.2 ping statistics --- 00:34:17.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.966 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:17.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:17.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:34:17.966 00:34:17.966 --- 10.0.0.1 ping statistics --- 00:34:17.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:17.966 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=4102251 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 4102251 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 4102251 ']' 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:17.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:17.966 10:36:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:17.966 [2024-12-13 10:36:11.519520] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
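Everything from ip netns add through the two pings above is the nvmf_tcp_init step: the target-side port is moved into its own network namespace, both ends get a 10.0.0.x/24 address, an iptables rule admits NVMe/TCP traffic on 4420, and reachability is verified in both directions. Condensed into a plain script (interface names and addresses taken from the trace; must run as root):

  set -e
  TGT_IF=cvl_0_0     INI_IF=cvl_0_1
  TGT_IP=10.0.0.2    INI_IP=10.0.0.1
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"          # target port lives in the namespace
  ip addr add "$INI_IP/24" dev "$INI_IF"     # initiator side stays in the root namespace
  ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 "$TGT_IP"                        # root namespace -> namespace
  ip netns exec "$NS" ping -c 1 "$INI_IP"    # namespace -> root namespace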
00:34:17.966 [2024-12-13 10:36:11.519613] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:17.966 [2024-12-13 10:36:11.636898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.966 [2024-12-13 10:36:11.737228] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:17.966 [2024-12-13 10:36:11.737272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:17.966 [2024-12-13 10:36:11.737282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:17.966 [2024-12-13 10:36:11.737308] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:17.966 [2024-12-13 10:36:11.737317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:17.966 [2024-12-13 10:36:11.738692] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:18.533 [2024-12-13 10:36:12.367220] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:18.533 [2024-12-13 10:36:12.379370] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:18.533 null0 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:18.533 null1 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=4102338 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 4102338 /tmp/host.sock 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 4102338 ']' 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:18.533 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:18.533 10:36:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:18.791 [2024-12-13 10:36:12.484958] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
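With networking in place, the target application is started inside the namespace and configured over its default RPC socket: a TCP transport, the well-known discovery subsystem listening on 8009, and two 1000 MiB null bdevs to export later. A condensed sketch using SPDK's scripts/rpc.py directly; SPDK_DIR is a placeholder for the checkout path, the until-loop is a simplified stand-in for the harness's waitforlisten, and the rpc_cmd calls in the trace wrap the same tool.

  SPDK_DIR=/path/to/spdk                       # placeholder
  NS=cvl_0_0_ns_spdk
  RPC="$SPDK_DIR/scripts/rpc.py"               # talks to /var/tmp/spdk.sock by default

  # Target reactor pinned to core 1 (-m 0x2), all trace groups enabled, run in the namespace.
  ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

  "$RPC" nvmf_create_transport -t tcp -o -u 8192                    # flags as in the trace
  "$RPC" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
         -t tcp -a 10.0.0.2 -s 8009                                 # discovery service on 8009
  "$RPC" bdev_null_create null0 1000 512                            # 1000 MiB, 512 B blocks
  "$RPC" bdev_null_create null1 1000 512
  "$RPC" bdev_wait_for_examine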
00:34:18.791 [2024-12-13 10:36:12.485042] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4102338 ] 00:34:18.791 [2024-12-13 10:36:12.597465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:19.049 [2024-12-13 10:36:12.703459] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.615 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:19.873 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.873 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:34:19.873 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:34:19.873 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:19.873 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.873 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.873 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:19.873 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:19.873 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:19.873 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.873 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:34:19.873 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:19.873 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.874 [2024-12-13 10:36:13.594650] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery 
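The second application in this span plays the host role: another nvmf_tgt instance pinned to core 0 with its RPC socket at /tmp/host.sock, on which bdev_nvme logging is enabled and the discovery service at 10.0.0.2:8009 is polled. The get_subsystem_names and get_bdev_list helpers are then just jq projections over bdev_nvme_get_controllers and bdev_get_bdevs on that socket, and both are expected to be empty until a subsystem is actually exported. A reduced sketch (SPDK_DIR again a placeholder, host_rpc an illustrative helper name):

  SPDK_DIR=/path/to/spdk                       # placeholder
  HOST_SOCK=/tmp/host.sock
  HOST_NQN=nqn.2021-12.io.spdk:test
  host_rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$HOST_SOCK" "$@"; }

  "$SPDK_DIR/build/bin/nvmf_tgt" -m 0x1 -r "$HOST_SOCK" &
  until host_rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

  host_rpc log_set_flag bdev_nvme              # enables the verbose bdev_nvme INFO lines below
  host_rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q "$HOST_NQN"

  get_subsystem_names() { host_rpc bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
  get_bdev_list()       { host_rpc bdev_get_bdevs            | jq -r '.[].name' | sort | xargs; }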
-- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:19.874 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.132 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:34:20.132 10:36:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:34:20.696 [2024-12-13 10:36:14.338099] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:20.696 [2024-12-13 10:36:14.338131] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:20.696 [2024-12-13 10:36:14.338157] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:20.696 
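Back on the target socket, the trace creates the data subsystem and wires it up: nqn.2016-06.io.spdk:cnode0 gets the first null bdev as a namespace, a data-plane listener on 4420, and an allowed-host entry for the host NQN used above. Only after the add_host call will the discovery log page report the subsystem to that host, which is what the waitforcondition checks that follow rely on. The same sequence against scripts/rpc.py (tgt_rpc is an illustrative wrapper for the default target socket):

  SPDK_DIR=/path/to/spdk                                # placeholder
  SUBSYS_NQN=nqn.2016-06.io.spdk:cnode0
  tgt_rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }

  tgt_rpc nvmf_create_subsystem "$SUBSYS_NQN"
  tgt_rpc nvmf_subsystem_add_ns "$SUBSYS_NQN" null0
  tgt_rpc nvmf_subsystem_add_listener "$SUBSYS_NQN" -t tcp -a 10.0.0.2 -s 4420
  tgt_rpc nvmf_subsystem_add_host "$SUBSYS_NQN" nqn.2021-12.io.spdk:test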
[2024-12-13 10:36:14.466559] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:20.954 [2024-12-13 10:36:14.651721] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:20.954 [2024-12-13 10:36:14.652981] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x615000325f80:1 started. 00:34:20.954 [2024-12-13 10:36:14.654700] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:20.954 [2024-12-13 10:36:14.654723] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:20.954 [2024-12-13 10:36:14.657680] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x615000325f80 was disconnected and freed. delete nvme_qpair. 00:34:20.954 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:20.954 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:20.954 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:20.954 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:20.954 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:20.954 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.954 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:20.954 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:20.954 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:20.954 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.212 
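The repeated local max=10 / (( max-- )) / eval blocks in this span are the harness's waitforcondition helper: it re-evaluates an arbitrary shell condition once a second for up to ten tries, so the test tolerates the short delay between issuing an RPC and the discovery poller attaching nvme0 and surfacing nvme0n1. A simplified stand-in with the same shape (the one-second sleep and ten-try limit mirror the trace; error handling is reduced, and the conditions reuse the helpers sketched earlier):

  waitforcondition() {
      local cond=$1 max=10
      while (( max-- )); do
          if eval "$cond"; then
              return 0
          fi
          sleep 1
      done
      echo "condition '$cond' never became true" >&2
      return 1
  }

  # e.g. block until the discovery path has created the controller and its namespace bdev:
  waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
  waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'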
10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:21.212 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:21.213 [2024-12-13 10:36:14.984614] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x615000326200:1 started. 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:21.213 [2024-12-13 10:36:14.988545] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x615000326200 was disconnected and freed. delete nvme_qpair. 
00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:21.213 10:36:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
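The notification checks running through this span track SPDK's notify bus on the host side: each bdev the discovery path creates (nvme0n1, then nvme0n2) raises one notification, and the test asks for everything newer than the last notify_id it consumed and counts the result with jq. One plausible way to keep that cursor, matching the notification_count and notify_id values printed in the trace (0, then 1, then 2), is sketched below; SPDK_DIR is a placeholder as before.

  SPDK_DIR=/path/to/spdk                                # placeholder
  notify_id=0

  get_notification_count() {
      # Count events newer than the last one we consumed, then advance the cursor.
      notification_count=$("$SPDK_DIR/scripts/rpc.py" -s /tmp/host.sock \
          notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$(( notify_id + notification_count ))
  }

  # After the target's nvmf_subsystem_add_ns ... null1, one new bdev notification is expected:
  get_notification_count
  (( notification_count == 1 ))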
| length' 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.213 [2024-12-13 10:36:15.087499] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:21.213 [2024-12-13 10:36:15.088294] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:21.213 [2024-12-13 10:36:15.088323] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:21.213 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.471 [2024-12-13 10:36:15.216742] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 
-- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:34:21.471 10:36:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:34:21.728 [2024-12-13 10:36:15.363701] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:34:21.728 [2024-12-13 10:36:15.363758] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:21.728 [2024-12-13 10:36:15.363771] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:21.728 [2024-12-13 10:36:15.363780] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:22.662 10:36:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.662 [2024-12-13 10:36:16.343505] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:22.662 [2024-12-13 10:36:16.343536] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:22.662 [2024-12-13 10:36:16.346046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:22.662 [2024-12-13 10:36:16.346075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.662 [2024-12-13 10:36:16.346088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:22.662 [2024-12-13 10:36:16.346098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.662 [2024-12-13 10:36:16.346109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:22.662 [2024-12-13 10:36:16.346119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.662 [2024-12-13 10:36:16.346129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:22.662 [2024-12-13 10:36:16.346138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:22.662 [2024-12-13 10:36:16.346148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:22.662 10:36:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:22.662 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:22.663 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.663 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:22.663 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.663 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:22.663 [2024-12-13 10:36:16.356054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:22.663 [2024-12-13 10:36:16.366094] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:22.663 [2024-12-13 10:36:16.366120] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:22.663 [2024-12-13 10:36:16.366128] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:22.663 [2024-12-13 10:36:16.366138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:22.663 [2024-12-13 10:36:16.366170] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:22.663 [2024-12-13 10:36:16.366459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.663 [2024-12-13 10:36:16.366483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:22.663 [2024-12-13 10:36:16.366495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:22.663 [2024-12-13 10:36:16.366512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:22.663 [2024-12-13 10:36:16.366536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:22.663 [2024-12-13 10:36:16.366550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:22.663 [2024-12-13 10:36:16.366562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:22.663 [2024-12-13 10:36:16.366571] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:22.663 [2024-12-13 10:36:16.366579] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
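The xtrace entries tagged common/autotest_common.sh@918-@924 above (local cond, local max=10, (( max-- )), eval of the condition, sleep 1, return 0) trace SPDK's waitforcondition polling helper. A minimal sketch of that pattern, reconstructed from the trace alone rather than from the script source (so the final failure path is an assumption), is:

    # Polling helper as seen at autotest_common.sh@918-@924 in the trace above.
    waitforcondition() {
        local cond=$1      # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        local max=10       # retry budget, visible as 'local max=10' (@919)
        while (( max-- )); do          # up to 10 attempts (@920)
            if eval "$cond"; then
                return 0               # condition met (@921/@922)
            fi
            sleep 1                    # one-second back-off between attempts (@924)
        done
        return 1                       # assumed: give up once retries are exhausted
    }

The test leans on this helper throughout the log to wait a few seconds for controller names, bdev lists and path lists to reach their expected values.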
00:34:22.663 [2024-12-13 10:36:16.366585] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:22.663 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.663 [2024-12-13 10:36:16.376206] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:22.663 [2024-12-13 10:36:16.376228] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:22.663 [2024-12-13 10:36:16.376235] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:22.663 [2024-12-13 10:36:16.376242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:22.663 [2024-12-13 10:36:16.376263] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:22.663 [2024-12-13 10:36:16.376554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.663 [2024-12-13 10:36:16.376574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:22.663 [2024-12-13 10:36:16.376585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:22.663 [2024-12-13 10:36:16.376600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:22.663 [2024-12-13 10:36:16.376623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:22.663 [2024-12-13 10:36:16.376633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:22.663 [2024-12-13 10:36:16.376642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:22.663 [2024-12-13 10:36:16.376650] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:22.663 [2024-12-13 10:36:16.376657] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:22.663 [2024-12-13 10:36:16.376663] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:22.663 [2024-12-13 10:36:16.386298] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:22.663 [2024-12-13 10:36:16.386320] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:22.663 [2024-12-13 10:36:16.386327] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:22.663 [2024-12-13 10:36:16.386333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:22.663 [2024-12-13 10:36:16.386356] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:22.663 [2024-12-13 10:36:16.386554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.663 [2024-12-13 10:36:16.386572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:22.663 [2024-12-13 10:36:16.386583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:22.663 [2024-12-13 10:36:16.386598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:22.663 [2024-12-13 10:36:16.386611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:22.663 [2024-12-13 10:36:16.386619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:22.663 [2024-12-13 10:36:16.386628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:22.663 [2024-12-13 10:36:16.386636] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:22.663 [2024-12-13 10:36:16.386643] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:22.663 [2024-12-13 10:36:16.386649] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:22.663 [2024-12-13 10:36:16.396391] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:22.663 [2024-12-13 10:36:16.396415] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:22.663 [2024-12-13 10:36:16.396422] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:22.663 [2024-12-13 10:36:16.396429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:22.663 [2024-12-13 10:36:16.396460] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:22.663 [2024-12-13 10:36:16.396760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.663 [2024-12-13 10:36:16.396778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:22.663 [2024-12-13 10:36:16.396788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:22.663 [2024-12-13 10:36:16.396803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:22.663 [2024-12-13 10:36:16.396832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:22.663 [2024-12-13 10:36:16.396842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:22.663 [2024-12-13 10:36:16.396850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:34:22.663 [2024-12-13 10:36:16.396858] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:22.663 [2024-12-13 10:36:16.396865] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:22.663 [2024-12-13 10:36:16.396871] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:22.663 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.663 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:22.663 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:22.663 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:22.663 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:22.663 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:22.663 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:22.663 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:22.663 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:22.663 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:22.663 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:22.663 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.663 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:22.663 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.663 [2024-12-13 10:36:16.406495] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:22.663 [2024-12-13 10:36:16.406518] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:22.663 [2024-12-13 10:36:16.406526] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:22.663 [2024-12-13 10:36:16.406532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:22.663 [2024-12-13 10:36:16.406551] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
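The pipelines traced at host/discovery.sh@55, @59 and @63 show how the test reads state back over the host-side RPC socket (/tmp/host.sock) via rpc_cmd, SPDK's rpc.py wrapper as used in this trace. Hedged reconstructions of those helpers, built only from the commands visible in the log:

    get_subsystem_names() {   # @59: attached controller names, sorted, on one line
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {         # @55: bdev names, e.g. "nvme0n1 nvme0n2"
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {   # @63: service IDs (ports) of every path of one controller
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }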
00:34:22.663 [2024-12-13 10:36:16.406792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.663 [2024-12-13 10:36:16.406809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:22.663 [2024-12-13 10:36:16.406819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:22.663 [2024-12-13 10:36:16.406835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:22.663 [2024-12-13 10:36:16.407750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:22.663 [2024-12-13 10:36:16.407769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:22.663 [2024-12-13 10:36:16.407780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:22.663 [2024-12-13 10:36:16.407794] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:22.663 [2024-12-13 10:36:16.407801] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:22.664 [2024-12-13 10:36:16.407807] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:22.664 [2024-12-13 10:36:16.416587] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:22.664 [2024-12-13 10:36:16.416609] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:22.664 [2024-12-13 10:36:16.416616] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:22.664 [2024-12-13 10:36:16.416623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:22.664 [2024-12-13 10:36:16.416648] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:22.664 [2024-12-13 10:36:16.416826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.664 [2024-12-13 10:36:16.416846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:22.664 [2024-12-13 10:36:16.416857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:22.664 [2024-12-13 10:36:16.416872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:22.664 [2024-12-13 10:36:16.416901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:22.664 [2024-12-13 10:36:16.416911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:22.664 [2024-12-13 10:36:16.416921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:34:22.664 [2024-12-13 10:36:16.416929] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:22.664 [2024-12-13 10:36:16.416935] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:22.664 [2024-12-13 10:36:16.416941] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:22.664 [2024-12-13 10:36:16.426684] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:22.664 [2024-12-13 10:36:16.426705] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:22.664 [2024-12-13 10:36:16.426711] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:22.664 [2024-12-13 10:36:16.426717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:22.664 [2024-12-13 10:36:16.426736] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:22.664 [2024-12-13 10:36:16.426873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.664 [2024-12-13 10:36:16.426889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:22.664 [2024-12-13 10:36:16.426899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:22.664 [2024-12-13 10:36:16.426913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:22.664 [2024-12-13 10:36:16.426949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:22.664 [2024-12-13 10:36:16.426960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:22.664 [2024-12-13 10:36:16.426969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:22.664 [2024-12-13 10:36:16.426976] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:22.664 [2024-12-13 10:36:16.426983] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:22.664 [2024-12-13 10:36:16.426989] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
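Steps @118 through @131 of host/discovery.sh, visible in the trace above, exercise a listener failover: a second listener on port 4421 is added, the original 4420 listener is removed, and the test polls until only 4421 remains as a path for nvme0. The connect() errno 111 (ECONNREFUSED) bursts in this stretch of the log are the expected fallout of that removal, as the initiator keeps retrying 10.0.0.2:4420 until discovery steers it to 4421. A condensed sketch of the sequence, using the helpers sketched earlier and assuming NVMF_PORT=4420 and NVMF_SECOND_PORT=4421 as they expand in this run:

    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421                    # @118: add the second path
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4420 4421" ]]'   # @122
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420                    # @127: drop the original path
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'        # @131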
00:34:22.664 [2024-12-13 10:36:16.429954] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:22.664 [2024-12-13 10:36:16.429981] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.664 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:22.922 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:22.923 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:22.923 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:22.923 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:22.923 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:22.923 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:22.923 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:22.923 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:22.923 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:22.923 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.923 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:22.923 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:22.923 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.923 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:22.923 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:22.923 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:22.923 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:22.923 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:22.923 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.923 10:36:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:24.296 [2024-12-13 10:36:17.756625] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:24.296 [2024-12-13 10:36:17.756647] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:24.296 [2024-12-13 10:36:17.756675] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:24.296 [2024-12-13 10:36:17.842955] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:24.296 [2024-12-13 10:36:17.941825] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:34:24.296 [2024-12-13 10:36:17.942796] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x615000327380:1 started. 00:34:24.296 [2024-12-13 10:36:17.944734] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:24.296 [2024-12-13 10:36:17.944769] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:24.296 [2024-12-13 10:36:17.946716] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x615000327380 was disconnected and freed. delete nvme_qpair. 
00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:24.296 request: 00:34:24.296 { 00:34:24.296 "name": "nvme", 00:34:24.296 "trtype": "tcp", 00:34:24.296 "traddr": "10.0.0.2", 00:34:24.296 "adrfam": "ipv4", 00:34:24.296 "trsvcid": "8009", 00:34:24.296 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:24.296 "wait_for_attach": true, 00:34:24.296 "method": "bdev_nvme_start_discovery", 00:34:24.296 "req_id": 1 00:34:24.296 } 00:34:24.296 Got JSON-RPC error response 00:34:24.296 response: 00:34:24.296 { 00:34:24.296 "code": -17, 00:34:24.296 "message": "File exists" 00:34:24.296 } 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:24.296 10:36:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.296 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:34:24.296 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:34:24.296 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:24.296 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:24.296 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:24.296 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.296 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:24.296 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:24.296 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.296 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:24.296 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:24.296 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:24.296 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:24.296 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:24.296 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:24.296 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:24.296 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:24.297 request: 00:34:24.297 { 00:34:24.297 "name": "nvme_second", 00:34:24.297 "trtype": "tcp", 00:34:24.297 "traddr": "10.0.0.2", 00:34:24.297 "adrfam": "ipv4", 00:34:24.297 "trsvcid": "8009", 00:34:24.297 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:24.297 "wait_for_attach": true, 00:34:24.297 "method": "bdev_nvme_start_discovery", 00:34:24.297 "req_id": 1 00:34:24.297 } 00:34:24.297 Got JSON-RPC error response 00:34:24.297 response: 00:34:24.297 { 00:34:24.297 "code": -17, 00:34:24.297 "message": "File exists" 00:34:24.297 } 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_discovery_info 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.297 10:36:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:25.670 [2024-12-13 10:36:19.180341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.670 [2024-12-13 10:36:19.180375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock 
connection error of tqpair=0x615000327600 with addr=10.0.0.2, port=8010 00:34:25.670 [2024-12-13 10:36:19.180428] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:25.670 [2024-12-13 10:36:19.180438] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:25.670 [2024-12-13 10:36:19.180457] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:26.602 [2024-12-13 10:36:20.182831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:26.602 [2024-12-13 10:36:20.182878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000327880 with addr=10.0.0.2, port=8010 00:34:26.602 [2024-12-13 10:36:20.182945] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:26.602 [2024-12-13 10:36:20.182957] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:26.602 [2024-12-13 10:36:20.182967] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:27.533 [2024-12-13 10:36:21.184854] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:27.533 request: 00:34:27.533 { 00:34:27.533 "name": "nvme_second", 00:34:27.533 "trtype": "tcp", 00:34:27.533 "traddr": "10.0.0.2", 00:34:27.533 "adrfam": "ipv4", 00:34:27.533 "trsvcid": "8010", 00:34:27.533 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:27.533 "wait_for_attach": false, 00:34:27.533 "attach_timeout_ms": 3000, 00:34:27.533 "method": "bdev_nvme_start_discovery", 00:34:27.533 "req_id": 1 00:34:27.533 } 00:34:27.533 Got JSON-RPC error response 00:34:27.533 response: 00:34:27.533 { 00:34:27.533 "code": -110, 00:34:27.533 "message": "Connection timed out" 00:34:27.533 } 00:34:27.533 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:27.533 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:27.533 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:27.533 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:27.533 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:27.533 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:27.533 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:27.533 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:27.533 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.533 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:27.533 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.533 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:27.533 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.533 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:27.533 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:27.533 10:36:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 4102338 00:34:27.533 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:27.533 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:27.534 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:34:27.534 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:27.534 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:34:27.534 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:27.534 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:27.534 rmmod nvme_tcp 00:34:27.534 rmmod nvme_fabrics 00:34:27.534 rmmod nvme_keyring 00:34:27.534 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:27.534 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:34:27.534 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:34:27.534 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 4102251 ']' 00:34:27.534 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 4102251 00:34:27.534 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 4102251 ']' 00:34:27.534 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 4102251 00:34:27.534 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:34:27.534 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:27.534 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4102251 00:34:27.534 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:27.534 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:27.534 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4102251' 00:34:27.534 killing process with pid 4102251 00:34:27.534 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 4102251 00:34:27.534 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 4102251 00:34:28.906 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:28.906 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:28.906 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:28.906 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:34:28.906 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:28.906 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:34:28.906 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:34:28.906 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:28.906 10:36:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:28.906 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:28.906 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:28.906 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.808 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:30.808 00:34:30.808 real 0m18.413s 00:34:30.808 user 0m23.573s 00:34:30.808 sys 0m5.397s 00:34:30.808 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:30.808 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.808 ************************************ 00:34:30.808 END TEST nvmf_host_discovery 00:34:30.808 ************************************ 00:34:30.808 10:36:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:30.808 10:36:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:30.808 10:36:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:30.808 10:36:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.808 ************************************ 00:34:30.808 START TEST nvmf_host_multipath_status 00:34:30.808 ************************************ 00:34:30.808 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:30.808 * Looking for test storage... 
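The sequence traced at the end of the discovery test above is its negative case: nothing listens on 10.0.0.2:8010, so bdev_nvme_start_discovery with a 3000 ms attach timeout retries, times out, and returns JSON-RPC error -110 (Connection timed out), which the NOT wrapper treats as the expected result. A condensed sketch of that check (the relative rpc.py path is illustrative; the run itself uses the absolute workspace path and the /tmp/host.sock socket shown in the trace):

    # No target listens on port 8010, so this RPC must fail within ~3 s with code -110.
    if ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
          -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
          -q nqn.2021-12.io.spdk:test -T 3000; then
        echo "unexpected: discovery to a dead port succeeded" >&2
        exit 1
    fi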
00:34:30.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:30.808 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:30.808 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:34:30.808 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:31.066 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:31.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.067 --rc genhtml_branch_coverage=1 00:34:31.067 --rc genhtml_function_coverage=1 00:34:31.067 --rc genhtml_legend=1 00:34:31.067 --rc geninfo_all_blocks=1 00:34:31.067 --rc geninfo_unexecuted_blocks=1 00:34:31.067 00:34:31.067 ' 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:31.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.067 --rc genhtml_branch_coverage=1 00:34:31.067 --rc genhtml_function_coverage=1 00:34:31.067 --rc genhtml_legend=1 00:34:31.067 --rc geninfo_all_blocks=1 00:34:31.067 --rc geninfo_unexecuted_blocks=1 00:34:31.067 00:34:31.067 ' 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:31.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.067 --rc genhtml_branch_coverage=1 00:34:31.067 --rc genhtml_function_coverage=1 00:34:31.067 --rc genhtml_legend=1 00:34:31.067 --rc geninfo_all_blocks=1 00:34:31.067 --rc geninfo_unexecuted_blocks=1 00:34:31.067 00:34:31.067 ' 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:31.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.067 --rc genhtml_branch_coverage=1 00:34:31.067 --rc genhtml_function_coverage=1 00:34:31.067 --rc genhtml_legend=1 00:34:31.067 --rc geninfo_all_blocks=1 00:34:31.067 --rc geninfo_unexecuted_blocks=1 00:34:31.067 00:34:31.067 ' 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
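The scripts/common.sh xtrace above is the coverage setup deciding whether the installed lcov predates 2.x: both version strings are split on '.', '-' and ':' and compared component by component, and for pre-2.0 lcov the old --rc lcov_branch_coverage/lcov_function_coverage switches stay in LCOV_OPTS. A simplified standalone sketch of that comparison (not the exact common.sh code, which also assembles the genhtml options seen above):

    # lt VER1 VER2: succeed (return 0) when VER1 is strictly older than VER2, e.g. lt 1.15 2
    lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi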
00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:31.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:34:31.067 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:34:36.332 10:36:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:36.332 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
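gather_supported_nvmf_pci_devs, traced through here, sorts every NIC on the node into a bucket by PCI vendor and device ID: Intel 0x1592/0x159b land in e810, Intel 0x37d2 in x722, and the listed Mellanox IDs in mlx, so this node's two 0x8086:0x159b functions are handled as E810 ports. A simplified sketch of that bucket selection (hypothetical helper, not common.sh itself; it covers only the IDs visible in the trace):

    # classify_nic PCI_ADDR: print the bucket the harness would place the device in
    classify_nic() {
        local pci=$1 vendor device
        vendor=$(cat "/sys/bus/pci/devices/$pci/vendor")   # e.g. 0x8086
        device=$(cat "/sys/bus/pci/devices/$pci/device")   # e.g. 0x159b
        case "$vendor:$device" in
            0x8086:0x1592|0x8086:0x159b) echo e810 ;;      # Intel E810 (ice)
            0x8086:0x37d2)               echo x722 ;;      # Intel X722
            0x15b3:*)                    echo mlx  ;;      # any of the Mellanox IDs above
            *)                           echo unsupported ;;
        esac
    }
    classify_nic 0000:af:00.0    # prints e810 on this node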
00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:36.332 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:36.332 Found net devices under 0000:af:00.0: cvl_0_0 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: 
cvl_0_1' 00:34:36.332 Found net devices under 0000:af:00.1: cvl_0_1 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:36.332 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:36.333 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:36.333 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:36.333 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:36.333 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:36.333 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:36.333 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:36.333 10:36:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:36.333 10:36:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:36.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:36.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:34:36.333 00:34:36.333 --- 10.0.0.2 ping statistics --- 00:34:36.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:36.333 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:36.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:36.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:34:36.333 00:34:36.333 --- 10.0.0.1 ping statistics --- 00:34:36.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:36.333 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=4107534 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 4107534 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 4107534 ']' 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
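The nvmf_tcp_init block above builds the physical-NIC test topology: the first E810 port (cvl_0_0) is moved into a network namespace and becomes the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits TCP/4420, and one ping in each direction proves reachability before the target is launched. Reduced to the bare commands (root required; interface names are the ones detected on this node):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator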
00:34:36.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:36.333 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:36.592 [2024-12-13 10:36:30.269903] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:34:36.592 [2024-12-13 10:36:30.269992] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:36.592 [2024-12-13 10:36:30.389833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:36.851 [2024-12-13 10:36:30.500821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:36.851 [2024-12-13 10:36:30.500863] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:36.851 [2024-12-13 10:36:30.500874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:36.851 [2024-12-13 10:36:30.500886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:36.851 [2024-12-13 10:36:30.500894] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:36.851 [2024-12-13 10:36:30.502887] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:36.851 [2024-12-13 10:36:30.502896] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:34:37.418 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:37.418 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:37.418 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:37.418 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:37.418 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:37.418 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:37.418 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=4107534 00:34:37.418 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:37.418 [2024-12-13 10:36:31.293980] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:37.677 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:37.677 Malloc0 00:34:37.936 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:37.936 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:38.194 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:38.453 [2024-12-13 10:36:32.105544] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:38.453 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:38.453 [2024-12-13 10:36:32.281991] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:38.453 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:38.453 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=4107792 00:34:38.453 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:38.453 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 4107792 /var/tmp/bdevperf.sock 00:34:38.453 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 4107792 ']' 00:34:38.453 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:38.453 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:38.453 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:38.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
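With nvmf_tgt running inside the namespace, multipath_status.sh provisions the target over JSON-RPC exactly as traced above: a TCP transport with the harness's default options (-o -u 8192), a 64 MB malloc bdev with 512-byte blocks, a subsystem that allows any host (-a), reports ANA (-r) and caps namespaces at two (-m 2), the bdev added as its namespace, and listeners on ports 4420 and 4421. The same sequence with a relative rpc.py path for brevity (the job uses the absolute workspace path and the default /var/tmp/spdk.sock):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421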
00:34:38.453 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:38.453 10:36:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:39.390 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:39.390 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:39.390 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:39.649 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:39.907 Nvme0n1 00:34:39.907 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:40.475 Nvme0n1 00:34:40.475 10:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:40.475 10:36:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:42.377 10:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:42.377 10:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:42.635 10:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:42.635 10:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:44.010 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:44.010 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:44.010 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.011 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:44.011 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.011 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:44.011 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.011 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:44.269 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:44.269 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:44.269 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.269 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:44.269 10:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.269 10:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:44.269 10:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.269 10:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:44.528 10:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.528 10:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:44.528 10:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.528 10:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:44.787 10:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:44.787 10:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:44.787 10:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:44.787 10:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:45.046 10:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:45.046 10:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:45.046 10:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
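The attach and polling pattern traced through this stretch is the heart of the multipath-status test: the host side (bdevperf, RPC socket /var/tmp/bdevperf.sock) attaches the same subsystem through both listeners with -x multipath so a single Nvme0n1 bdev carries two paths, the target then changes a listener's ANA state, and bdev_nvme_get_io_paths with the jq filters above reports each path's current/connected/accessible flags. Condensed to one round of that loop (sockets, NQN and jq filter as used in this run; the relative rpc.py path is illustrative):

    rpc=./scripts/rpc.py
    bperf=/var/tmp/bdevperf.sock

    # two paths to one subsystem; -x multipath merges them under a single bdev
    $rpc -s $bperf bdev_nvme_set_options -r -1
    $rpc -s $bperf bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    $rpc -s $bperf bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

    # demote the 4420 listener on the target, then read the per-path flags on the host
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    $rpc -s $bperf bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'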
00:34:45.304 10:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:45.304 10:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:46.681 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:46.682 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:46.682 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.682 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:46.682 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:46.682 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:46.682 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:46.682 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.941 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.941 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:46.941 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.941 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:46.941 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:46.941 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:46.941 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:46.941 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:47.200 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.200 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:47.200 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:34:47.200 10:36:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:47.458 10:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.458 10:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:47.459 10:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:47.459 10:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:47.717 10:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:47.717 10:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:47.717 10:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:47.717 10:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:47.976 10:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:49.353 10:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:49.353 10:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:49.353 10:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.353 10:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:49.353 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:49.353 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:49.353 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.353 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:49.353 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:49.353 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:49.353 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.353 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:49.612 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:49.612 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:49.612 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:49.612 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:49.871 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:49.871 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:49.871 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:49.871 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.129 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.129 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:50.129 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:50.129 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:50.389 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:50.389 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:50.389 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:50.389 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:50.647 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:51.583 10:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:51.583 10:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:51.583 10:36:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.583 10:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:51.841 10:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:51.841 10:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:51.841 10:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:51.841 10:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:52.100 10:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:52.100 10:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:52.100 10:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.100 10:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:52.359 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.359 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:52.359 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.359 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:52.618 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.618 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:52.618 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.618 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:52.618 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.618 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:52.618 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.618 10:36:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:52.876 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:52.876 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:52.876 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:53.135 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:53.394 10:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:54.329 10:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:54.329 10:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:54.329 10:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:54.329 10:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.588 10:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:54.588 10:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:54.588 10:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.588 10:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:54.588 10:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:54.588 10:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:54.588 10:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.588 10:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:54.846 10:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:54.846 10:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:54.846 10:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.846 10:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:55.105 10:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.105 10:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:55.105 10:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:55.105 10:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.364 10:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:55.364 10:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:55.364 10:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.364 10:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:55.364 10:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:55.364 10:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:55.364 10:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:55.623 10:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:55.881 10:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:56.817 10:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:56.817 10:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:56.817 10:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:56.817 10:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:57.076 10:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:57.076 10:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:57.076 10:36:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.076 10:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:57.334 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.334 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:57.334 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.334 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:57.592 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.592 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:57.592 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.592 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:57.592 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.592 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:57.592 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:57.592 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.851 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:57.851 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:57.851 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.851 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:58.110 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.110 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:58.368 10:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:34:58.368 10:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:58.627 10:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:58.886 10:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:59.820 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:59.820 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:59.820 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:59.820 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:00.078 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.078 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:00.078 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:00.078 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.078 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.079 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:00.079 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.079 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:00.337 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.337 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:00.337 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.337 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:00.595 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.595 10:36:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:00.595 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.595 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:00.854 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.854 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:00.854 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:00.854 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:01.113 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:01.113 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:35:01.113 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:01.113 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:01.373 10:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:35:02.434 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:35:02.434 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:02.434 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.434 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:02.711 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:02.711 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:02.711 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.711 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:02.711 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:02.711 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:02.711 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.979 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:02.979 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:02.979 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:02.979 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.979 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:03.238 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.238 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:03.238 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.238 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:03.496 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.496 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:03.497 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.497 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:03.755 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.755 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:35:03.755 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:03.755 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:04.014 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
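For reference, the helpers being exercised in the trace above can be sketched as follows. This is a reconstruction from the xtrace output only (the @59/@60 lines of set_ANA_state, the @64 lines of port_status, and the @68-@73 calls made by check_status); the exact function bodies in host/multipath_status.sh may differ, and the NQN, target address, listener ports, and bdevperf RPC socket are simply the values seen in the traced commands.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Set the ANA state of the 4420 and 4421 listeners on the target side.
set_ANA_state() {                 # $1 = state for port 4420, $2 = state for port 4421
	$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
	$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# Ask bdevperf for its io_paths and compare one field of one path against the expected value.
port_status() {                   # $1 = trsvcid, $2 = field (current|connected|accessible), $3 = expected
	local status
	status=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
		jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
	[[ "$status" == "$3" ]]
}

# Expected current/connected/accessible values for port 4420 and port 4421, in that order.
check_status() {
	port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
	port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
	port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
}

Each set_ANA_state / sleep 1 / check_status round in the log applies one combination of ANA states (optimized, non_optimized, inaccessible) to the two listeners and then verifies the resulting path view from the host side.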
00:35:05.014 10:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:35:05.014 10:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:05.014 10:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.014 10:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:05.273 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.273 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:05.273 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.273 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:05.532 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.532 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:05.532 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.532 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:05.791 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.791 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:05.791 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.791 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:05.791 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.791 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:05.791 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.791 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:06.049 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.049 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:06.049 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.049 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:06.308 10:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.308 10:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:35:06.308 10:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:06.567 10:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:06.841 10:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:35:07.778 10:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:35:07.778 10:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:07.778 10:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.778 10:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:08.036 10:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.036 10:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:08.036 10:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.036 10:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:08.036 10:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:08.036 10:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:08.037 10:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.037 10:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:08.295 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:35:08.295 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:08.295 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.295 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:08.554 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.554 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:08.554 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.554 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:08.814 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.814 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:08.814 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.814 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:08.814 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:08.814 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 4107792 00:35:08.814 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 4107792 ']' 00:35:08.814 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 4107792 00:35:08.814 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:35:08.814 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:09.073 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4107792 00:35:09.073 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:35:09.073 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:35:09.073 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4107792' 00:35:09.073 killing process with pid 4107792 00:35:09.073 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 4107792 00:35:09.073 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 4107792 00:35:09.073 { 00:35:09.073 "results": [ 00:35:09.073 { 00:35:09.073 "job": "Nvme0n1", 
00:35:09.073 "core_mask": "0x4", 00:35:09.073 "workload": "verify", 00:35:09.073 "status": "terminated", 00:35:09.073 "verify_range": { 00:35:09.073 "start": 0, 00:35:09.073 "length": 16384 00:35:09.073 }, 00:35:09.073 "queue_depth": 128, 00:35:09.073 "io_size": 4096, 00:35:09.073 "runtime": 28.502533, 00:35:09.073 "iops": 9299.138430959803, 00:35:09.073 "mibps": 36.32475949593673, 00:35:09.073 "io_failed": 0, 00:35:09.073 "io_timeout": 0, 00:35:09.073 "avg_latency_us": 13742.319367491617, 00:35:09.073 "min_latency_us": 854.3085714285714, 00:35:09.073 "max_latency_us": 3019898.88 00:35:09.073 } 00:35:09.073 ], 00:35:09.073 "core_count": 1 00:35:09.073 } 00:35:10.012 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 4107792 00:35:10.012 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:10.012 [2024-12-13 10:36:32.361117] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:35:10.012 [2024-12-13 10:36:32.361208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4107792 ] 00:35:10.012 [2024-12-13 10:36:32.472906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:10.012 [2024-12-13 10:36:32.579979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:35:10.012 Running I/O for 90 seconds... 00:35:10.012 9953.00 IOPS, 38.88 MiB/s [2024-12-13T09:37:03.903Z] 10053.00 IOPS, 39.27 MiB/s [2024-12-13T09:37:03.903Z] 10018.00 IOPS, 39.13 MiB/s [2024-12-13T09:37:03.903Z] 10032.00 IOPS, 39.19 MiB/s [2024-12-13T09:37:03.903Z] 10012.60 IOPS, 39.11 MiB/s [2024-12-13T09:37:03.903Z] 10016.83 IOPS, 39.13 MiB/s [2024-12-13T09:37:03.903Z] 10031.29 IOPS, 39.18 MiB/s [2024-12-13T09:37:03.903Z] 10023.88 IOPS, 39.16 MiB/s [2024-12-13T09:37:03.903Z] 10024.67 IOPS, 39.16 MiB/s [2024-12-13T09:37:03.903Z] 10013.80 IOPS, 39.12 MiB/s [2024-12-13T09:37:03.903Z] 10006.00 IOPS, 39.09 MiB/s [2024-12-13T09:37:03.903Z] 9978.58 IOPS, 38.98 MiB/s [2024-12-13T09:37:03.903Z] [2024-12-13 10:36:46.850804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.012 [2024-12-13 10:36:46.850870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:10.012 [2024-12-13 10:36:46.850929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.012 [2024-12-13 10:36:46.850943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:10.012 [2024-12-13 10:36:46.850963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.012 [2024-12-13 10:36:46.850973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:10.012 [2024-12-13 10:36:46.850992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.012 
[2024-12-13 10:36:46.851003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.851020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.851031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.851048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.851059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.851077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.851087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.851104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.851115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.851133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:89432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.851145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.851163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.851180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.851199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.851210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.851227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.851238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.851255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.851265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.851283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89472 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.851293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.851309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.851319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.851337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.851347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:89504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:89536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852414] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:89568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:89600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:89624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852712] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.013 [2024-12-13 10:36:46.852864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:10.013 [2024-12-13 10:36:46.852882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.852891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.852909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.852919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.852937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.852946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.852966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.852977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:35:10.014 [2024-12-13 10:36:46.852995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.014 [2024-12-13 10:36:46.853254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.853943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:10.014 [2024-12-13 10:36:46.853973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.853992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.854002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.854022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.854032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.854051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.854062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.854081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.854091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.854110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.014 [2024-12-13 10:36:46.854121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:10.014 [2024-12-13 10:36:46.854140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 
lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854660] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.854950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:10.015 
[2024-12-13 10:36:46.854982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.854992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.855014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.855025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.855047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.855057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.855079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.855089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.855113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.855123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.855146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.855157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.855178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.855189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.855210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.855219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.855241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.855251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.855272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.855282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.855304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.855313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.855398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.855410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.855435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.855445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.855473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.855483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.855507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.015 [2024-12-13 10:36:46.855517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:10.015 [2024-12-13 10:36:46.855540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.015 [2024-12-13 10:36:46.855550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:36:46.855574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.016 [2024-12-13 10:36:46.855585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:36:46.855607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.016 [2024-12-13 10:36:46.855620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:36:46.855643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.016 [2024-12-13 10:36:46.855652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:36:46.855676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:89344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.016 [2024-12-13 10:36:46.855686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:36:46.855709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.016 [2024-12-13 10:36:46.855720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:36:46.855743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:89360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.016 [2024-12-13 10:36:46.855753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.016 9680.85 IOPS, 37.82 MiB/s [2024-12-13T09:37:03.907Z] 8989.36 IOPS, 35.11 MiB/s [2024-12-13T09:37:03.907Z] 8390.07 IOPS, 32.77 MiB/s [2024-12-13T09:37:03.907Z] 8098.62 IOPS, 31.64 MiB/s [2024-12-13T09:37:03.907Z] 8197.41 IOPS, 32.02 MiB/s [2024-12-13T09:37:03.907Z] 8292.22 IOPS, 32.39 MiB/s [2024-12-13T09:37:03.907Z] 8483.58 IOPS, 33.14 MiB/s [2024-12-13T09:37:03.907Z] 8661.20 IOPS, 33.83 MiB/s [2024-12-13T09:37:03.907Z] 8806.48 IOPS, 34.40 MiB/s [2024-12-13T09:37:03.907Z] 8854.41 IOPS, 34.59 MiB/s [2024-12-13T09:37:03.907Z] 8894.09 IOPS, 34.74 MiB/s [2024-12-13T09:37:03.907Z] 8974.71 IOPS, 35.06 MiB/s [2024-12-13T09:37:03.907Z] 9104.04 IOPS, 35.56 MiB/s [2024-12-13T09:37:03.907Z] 9221.31 IOPS, 36.02 MiB/s [2024-12-13T09:37:03.907Z] [2024-12-13 10:37:00.451710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.451766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.451816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.451829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.451848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.451858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.451876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.016 [2024-12-13 10:37:00.451886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.451903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.016 [2024-12-13 10:37:00.451914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.455038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100096 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.455072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.455104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.455121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.455138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.455148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.455166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.455176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.455193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.455203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.455220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.455231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.455247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.455257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.455274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.455285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.455302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.455312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.455329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.455341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.455359] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.455369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.455385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.455396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.455413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.455423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.456609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.016 [2024-12-13 10:37:00.456635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.456662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.016 [2024-12-13 10:37:00.456674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.456692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.456702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.456720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.456730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.456747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.456757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.456774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.456785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.456802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.456812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 
10:37:00.456830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.456840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.456857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.456867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:10.016 [2024-12-13 10:37:00.456885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:10.016 [2024-12-13 10:37:00.456896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:10.016 9265.04 IOPS, 36.19 MiB/s [2024-12-13T09:37:03.907Z] 9295.79 IOPS, 36.31 MiB/s [2024-12-13T09:37:03.907Z] Received shutdown signal, test time was about 28.503204 seconds 00:35:10.016 00:35:10.016 Latency(us) 00:35:10.016 [2024-12-13T09:37:03.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.016 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:10.016 Verification LBA range: start 0x0 length 0x4000 00:35:10.016 Nvme0n1 : 28.50 9299.14 36.32 0.00 0.00 13742.32 854.31 3019898.88 00:35:10.016 [2024-12-13T09:37:03.908Z] =================================================================================================================== 00:35:10.017 [2024-12-13T09:37:03.908Z] Total : 9299.14 36.32 0.00 0.00 13742.32 854.31 3019898.88 00:35:10.017 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:10.017 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:35:10.017 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:10.017 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:35:10.017 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:10.017 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:35:10.017 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:10.017 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:35:10.017 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:10.017 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:10.017 rmmod nvme_tcp 00:35:10.017 rmmod nvme_fabrics 00:35:10.276 rmmod nvme_keyring 00:35:10.276 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:10.276 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:35:10.276 10:37:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:35:10.276 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 4107534 ']' 00:35:10.276 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 4107534 00:35:10.276 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 4107534 ']' 00:35:10.276 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 4107534 00:35:10.276 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:35:10.276 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:10.276 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4107534 00:35:10.276 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:10.276 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:10.276 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4107534' 00:35:10.276 killing process with pid 4107534 00:35:10.276 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 4107534 00:35:10.276 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 4107534 00:35:11.672 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:11.672 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:11.672 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:11.672 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:35:11.672 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:35:11.672 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:35:11.672 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:11.672 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:11.672 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:11.672 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:11.672 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:11.672 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.578 10:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:13.578 00:35:13.578 real 0m42.800s 00:35:13.578 user 1m56.084s 00:35:13.578 sys 0m10.923s 00:35:13.578 10:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:13.578 10:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:13.578 
************************************ 00:35:13.578 END TEST nvmf_host_multipath_status 00:35:13.578 ************************************ 00:35:13.578 10:37:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:13.579 10:37:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:13.579 10:37:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:13.579 10:37:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.579 ************************************ 00:35:13.579 START TEST nvmf_discovery_remove_ifc 00:35:13.579 ************************************ 00:35:13.579 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:13.838 * Looking for test storage... 00:35:13.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:35:13.838 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:13.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.839 --rc genhtml_branch_coverage=1 00:35:13.839 --rc genhtml_function_coverage=1 00:35:13.839 --rc genhtml_legend=1 00:35:13.839 --rc geninfo_all_blocks=1 00:35:13.839 --rc geninfo_unexecuted_blocks=1 00:35:13.839 00:35:13.839 ' 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:13.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.839 --rc genhtml_branch_coverage=1 00:35:13.839 --rc genhtml_function_coverage=1 00:35:13.839 --rc genhtml_legend=1 00:35:13.839 --rc geninfo_all_blocks=1 00:35:13.839 --rc geninfo_unexecuted_blocks=1 00:35:13.839 00:35:13.839 ' 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:13.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.839 --rc genhtml_branch_coverage=1 00:35:13.839 --rc genhtml_function_coverage=1 00:35:13.839 --rc genhtml_legend=1 00:35:13.839 --rc geninfo_all_blocks=1 00:35:13.839 --rc geninfo_unexecuted_blocks=1 00:35:13.839 00:35:13.839 ' 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:13.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.839 --rc genhtml_branch_coverage=1 00:35:13.839 --rc genhtml_function_coverage=1 00:35:13.839 --rc genhtml_legend=1 00:35:13.839 --rc geninfo_all_blocks=1 00:35:13.839 --rc geninfo_unexecuted_blocks=1 00:35:13.839 00:35:13.839 ' 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:13.839 
10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:13.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:35:13.839 10:37:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:19.129 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:19.129 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:35:19.129 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:19.129 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:19.129 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:19.129 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:19.129 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:19.129 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:35:19.129 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:19.129 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:35:19.129 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:35:19.129 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:35:19.129 10:37:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:35:19.129 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:35:19.129 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:35:19.129 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:19.129 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:19.130 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:19.130 10:37:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:19.130 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:19.130 Found net devices under 0000:af:00.0: cvl_0_0 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:19.130 Found net devices under 0000:af:00.1: cvl_0_1 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:19.130 10:37:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:19.130 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:19.130 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:19.130 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:19.130 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:19.389 
10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:19.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:19.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:35:19.389 00:35:19.389 --- 10.0.0.2 ping statistics --- 00:35:19.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:19.389 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:19.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:19.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:35:19.389 00:35:19.389 --- 10.0.0.1 ping statistics --- 00:35:19.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:19.389 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=4116576 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 4116576 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 4116576 ']' 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:19.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:19.389 10:37:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:19.389 [2024-12-13 10:37:13.238466] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:35:19.389 [2024-12-13 10:37:13.238557] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:19.648 [2024-12-13 10:37:13.355954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.648 [2024-12-13 10:37:13.459973] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:19.648 [2024-12-13 10:37:13.460012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:19.648 [2024-12-13 10:37:13.460021] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:19.649 [2024-12-13 10:37:13.460032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:19.649 [2024-12-13 10:37:13.460040] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:19.649 [2024-12-13 10:37:13.461403] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:20.217 10:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:20.217 10:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:35:20.217 10:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:20.217 10:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:20.217 10:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:20.217 10:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:20.217 10:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:35:20.217 10:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.217 10:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:20.217 [2024-12-13 10:37:14.089781] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:20.217 [2024-12-13 10:37:14.097927] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:20.476 null0 00:35:20.476 [2024-12-13 10:37:14.129940] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:20.476 10:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.476 10:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=4116813 00:35:20.476 10:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:35:20.476 10:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 4116813 /tmp/host.sock 00:35:20.476 10:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 4116813 ']' 00:35:20.476 10:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:35:20.476 10:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:20.476 10:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:20.476 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:20.476 10:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:20.476 10:37:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:20.476 [2024-12-13 10:37:14.225715] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:35:20.476 [2024-12-13 10:37:14.225796] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4116813 ] 00:35:20.476 [2024-12-13 10:37:14.338140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.735 [2024-12-13 10:37:14.454267] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:21.303 10:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:21.303 10:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:35:21.303 10:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:21.303 10:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:35:21.303 10:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.303 10:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:21.303 10:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.303 10:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:35:21.303 10:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.303 10:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:21.561 10:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.561 10:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:35:21.561 10:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.561 10:37:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:22.937 [2024-12-13 10:37:16.440993] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:22.937 [2024-12-13 10:37:16.441030] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:22.937 [2024-12-13 10:37:16.441058] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:22.937 [2024-12-13 10:37:16.568464] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:22.937 [2024-12-13 10:37:16.669333] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:35:22.937 [2024-12-13 10:37:16.670541] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x615000326200:1 started. 00:35:22.937 [2024-12-13 10:37:16.672173] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:22.937 [2024-12-13 10:37:16.672225] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:22.937 [2024-12-13 10:37:16.672285] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:22.937 [2024-12-13 10:37:16.672304] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:22.937 [2024-12-13 10:37:16.672330] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:22.937 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.937 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:22.937 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:22.937 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:22.937 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:22.937 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.937 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:22.937 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:22.937 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:22.937 [2024-12-13 10:37:16.679353] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x615000326200 was disconnected and freed. delete nvme_qpair. 
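Condensed from the trace above, the host-side bring-up amounts to the following RPC sequence. This is only a sketch of what the harness did in this run: rpc_cmd is the test helper that wraps scripts/rpc.py, and the socket path, addresses, NQN and timeout values are the ones this particular run used.

    # start the host app with RPCs deferred and bdev_nvme debug logging (full binary path as in the trace)
    nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    # bdev_nvme options exactly as passed in the trace, then release the framework
    rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
    rpc_cmd -s /tmp/host.sock framework_start_init
    # attach through the discovery service on 10.0.0.2:8009; the short loss/reconnect
    # timeouts make the interface-removal step later in the test converge quickly
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach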
00:35:22.937 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.937 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:22.937 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:35:22.937 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:35:22.937 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:22.937 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:22.937 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:22.937 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:22.937 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.937 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:22.937 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:22.937 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:23.196 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.196 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:23.196 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:24.132 10:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:24.132 10:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:24.132 10:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:24.132 10:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.132 10:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:24.132 10:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:24.132 10:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:24.132 10:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.132 10:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:24.132 10:37:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:25.068 10:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:25.068 10:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:25.068 10:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:25.068 10:37:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.068 10:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:25.068 10:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:25.068 10:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:25.068 10:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.326 10:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:25.326 10:37:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:26.260 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:26.260 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:26.260 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:26.260 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:26.260 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.260 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:26.260 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:26.260 10:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.260 10:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:26.260 10:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:27.196 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:27.196 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:27.196 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:27.196 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:27.196 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.196 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:27.196 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:27.196 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.196 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:27.196 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:28.574 10:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:28.574 10:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:28.574 10:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:35:28.574 10:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:28.574 10:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.574 10:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:28.574 10:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:28.574 10:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.574 [2024-12-13 10:37:22.113178] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:28.574 [2024-12-13 10:37:22.113240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:28.574 [2024-12-13 10:37:22.113256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.574 [2024-12-13 10:37:22.113271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:28.574 [2024-12-13 10:37:22.113282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.574 [2024-12-13 10:37:22.113293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:28.574 [2024-12-13 10:37:22.113302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.574 [2024-12-13 10:37:22.113312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:28.574 [2024-12-13 10:37:22.113322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.574 [2024-12-13 10:37:22.113332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:28.574 [2024-12-13 10:37:22.113341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:28.574 [2024-12-13 10:37:22.113350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325d00 is same with the state(6) to be set 00:35:28.574 [2024-12-13 10:37:22.123195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325d00 (9): Bad file descriptor 00:35:28.574 10:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:28.574 10:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:28.574 [2024-12-13 10:37:22.133237] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:28.574 [2024-12-13 10:37:22.133262] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
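The repeated bdev_get_bdevs calls above come from the wait_for_bdev/get_bdev_list helpers visible in the trace. Roughly, reconstructed from those calls (the real helper presumably also bounds the number of retries; only the pipeline and the one-second poll are taken directly from the log):

    get_bdev_list() {
        # list current bdev names on the host app, normalized to a single sorted line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # poll once a second until the bdev list matches the expected value;
        # an empty argument means "wait until the namespace bdev is gone"
        while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
    }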
00:35:28.574 [2024-12-13 10:37:22.133270] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:28.574 [2024-12-13 10:37:22.133277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:28.574 [2024-12-13 10:37:22.133318] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:29.511 10:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:29.511 10:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:29.511 10:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:29.511 10:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:29.511 10:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.511 10:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:29.511 10:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:29.511 [2024-12-13 10:37:23.142469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:29.511 [2024-12-13 10:37:23.142520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325d00 with addr=10.0.0.2, port=4420 00:35:29.511 [2024-12-13 10:37:23.142543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325d00 is same with the state(6) to be set 00:35:29.511 [2024-12-13 10:37:23.142581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325d00 (9): Bad file descriptor 00:35:29.511 [2024-12-13 10:37:23.143234] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:35:29.511 [2024-12-13 10:37:23.143294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:29.511 [2024-12-13 10:37:23.143316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:29.511 [2024-12-13 10:37:23.143334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:29.511 [2024-12-13 10:37:23.143350] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:29.511 [2024-12-13 10:37:23.143363] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:29.511 [2024-12-13 10:37:23.143374] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:29.511 [2024-12-13 10:37:23.143391] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
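At this point the host has lost the admin/IO qpair and is cycling through disconnect/reconnect under the 1 s reconnect delay and 2 s controller-loss timeout passed to bdev_nvme_start_discovery. The trace does not do this, but if one wanted to watch that state machine by hand from the same socket, something along these lines should work (bdev_nvme_get_controllers is a standard SPDK RPC; the polling loop itself is illustrative):

    # poll controller/path state from the host app's RPC socket once a second
    while sleep 1; do
        scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq .
    done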
00:35:29.511 [2024-12-13 10:37:23.143402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:29.511 10:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.511 10:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:29.511 10:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:30.448 [2024-12-13 10:37:24.145890] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:30.448 [2024-12-13 10:37:24.145920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:30.448 [2024-12-13 10:37:24.145936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:30.448 [2024-12-13 10:37:24.145946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:30.448 [2024-12-13 10:37:24.145956] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:35:30.448 [2024-12-13 10:37:24.145970] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:30.448 [2024-12-13 10:37:24.145977] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:30.448 [2024-12-13 10:37:24.145983] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:30.448 [2024-12-13 10:37:24.146017] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:30.448 [2024-12-13 10:37:24.146047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:30.448 [2024-12-13 10:37:24.146061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.448 [2024-12-13 10:37:24.146075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:30.448 [2024-12-13 10:37:24.146085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.448 [2024-12-13 10:37:24.146095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:30.448 [2024-12-13 10:37:24.146104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.448 [2024-12-13 10:37:24.146115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:30.448 [2024-12-13 10:37:24.146124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.448 [2024-12-13 10:37:24.146135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:30.448 [2024-12-13 10:37:24.146144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.448 [2024-12-13 10:37:24.146153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:35:30.448 [2024-12-13 10:37:24.146193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325800 (9): Bad file descriptor 00:35:30.448 [2024-12-13 10:37:24.147190] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:30.448 [2024-12-13 10:37:24.147212] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:35:30.448 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:30.448 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:30.448 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:30.448 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:30.448 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.449 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:30.449 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:30.449 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.449 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:30.449 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:30.449 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:30.449 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:30.449 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:30.449 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:30.449 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:30.449 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.449 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:30.449 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:30.449 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:30.449 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.708 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:30.708 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:31.644 10:37:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:31.644 10:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:31.644 10:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:31.644 10:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:31.644 10:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.644 10:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:31.644 10:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:31.644 10:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.644 10:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:31.644 10:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:32.580 [2024-12-13 10:37:26.205670] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:32.580 [2024-12-13 10:37:26.205695] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:32.580 [2024-12-13 10:37:26.205724] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:32.580 [2024-12-13 10:37:26.292002] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:32.580 10:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:32.580 10:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:32.580 10:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:32.580 10:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.580 10:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:32.580 10:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:32.580 10:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:32.580 10:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.580 10:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:32.580 10:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:32.838 [2024-12-13 10:37:26.475134] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:35:32.838 [2024-12-13 10:37:26.476183] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x615000326e80:1 started. 
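The recovery half of the test is the mirror image of the removal: put the address back inside the target namespace, bring the link up, and wait for discovery to re-attach under a new controller name. A sketch of the commands as they appear in the trace; the namespace and interface names are the ones this run assigned:

    # restore the target-side address and bring the interface back up
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # discovery re-attaches on its own; the namespace comes back as a new bdev
    wait_for_bdev nvme1n1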
00:35:32.838 [2024-12-13 10:37:26.477794] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:32.838 [2024-12-13 10:37:26.477841] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:32.838 [2024-12-13 10:37:26.477889] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:32.838 [2024-12-13 10:37:26.477908] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:32.838 [2024-12-13 10:37:26.477920] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:32.838 [2024-12-13 10:37:26.484858] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x615000326e80 was disconnected and freed. delete nvme_qpair. 00:35:33.774 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:33.774 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:33.774 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:33.774 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.774 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:33.774 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:33.774 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:33.774 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.774 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:33.774 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:33.774 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 4116813 00:35:33.774 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 4116813 ']' 00:35:33.774 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 4116813 00:35:33.775 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:33.775 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:33.775 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4116813 00:35:33.775 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:33.775 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:33.775 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4116813' 00:35:33.775 killing process with pid 4116813 00:35:33.775 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 4116813 00:35:33.775 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 4116813 00:35:34.710 
10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:34.710 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:34.710 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:35:34.710 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:34.710 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:35:34.710 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:34.710 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:34.710 rmmod nvme_tcp 00:35:34.710 rmmod nvme_fabrics 00:35:34.710 rmmod nvme_keyring 00:35:34.710 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:34.710 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:35:34.710 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:35:34.710 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 4116576 ']' 00:35:34.710 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 4116576 00:35:34.710 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 4116576 ']' 00:35:34.710 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 4116576 00:35:34.710 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:34.710 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:34.710 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4116576 00:35:34.710 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:34.710 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:34.710 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4116576' 00:35:34.710 killing process with pid 4116576 00:35:34.710 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 4116576 00:35:34.710 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 4116576 00:35:36.088 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:36.088 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:36.088 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:36.088 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:35:36.088 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:35:36.088 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:35:36.088 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:36.088 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:36.088 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:36.088 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:36.088 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:36.088 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.000 10:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:38.000 00:35:38.000 real 0m24.251s 00:35:38.000 user 0m31.692s 00:35:38.000 sys 0m5.658s 00:35:38.000 10:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:38.000 10:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:38.000 ************************************ 00:35:38.000 END TEST nvmf_discovery_remove_ifc 00:35:38.000 ************************************ 00:35:38.000 10:37:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:38.000 10:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:38.000 10:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:38.000 10:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.000 ************************************ 00:35:38.000 START TEST nvmf_identify_kernel_target 00:35:38.000 ************************************ 00:35:38.000 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:38.000 * Looking for test storage... 
00:35:38.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:38.000 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:38.000 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:35:38.000 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:38.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.260 --rc genhtml_branch_coverage=1 00:35:38.260 --rc genhtml_function_coverage=1 00:35:38.260 --rc genhtml_legend=1 00:35:38.260 --rc geninfo_all_blocks=1 00:35:38.260 --rc geninfo_unexecuted_blocks=1 00:35:38.260 00:35:38.260 ' 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:38.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.260 --rc genhtml_branch_coverage=1 00:35:38.260 --rc genhtml_function_coverage=1 00:35:38.260 --rc genhtml_legend=1 00:35:38.260 --rc geninfo_all_blocks=1 00:35:38.260 --rc geninfo_unexecuted_blocks=1 00:35:38.260 00:35:38.260 ' 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:38.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.260 --rc genhtml_branch_coverage=1 00:35:38.260 --rc genhtml_function_coverage=1 00:35:38.260 --rc genhtml_legend=1 00:35:38.260 --rc geninfo_all_blocks=1 00:35:38.260 --rc geninfo_unexecuted_blocks=1 00:35:38.260 00:35:38.260 ' 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:38.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.260 --rc genhtml_branch_coverage=1 00:35:38.260 --rc genhtml_function_coverage=1 00:35:38.260 --rc genhtml_legend=1 00:35:38.260 --rc geninfo_all_blocks=1 00:35:38.260 --rc geninfo_unexecuted_blocks=1 00:35:38.260 00:35:38.260 ' 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:38.260 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:35:38.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:38.261 10:37:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:43.530 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:43.530 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:43.530 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:43.530 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:43.530 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:43.530 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:43.530 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:43.530 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:43.530 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:43.530 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:35:43.530 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:43.530 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:35:43.530 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:43.530 10:37:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:35:43.530 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:43.530 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:43.531 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:43.531 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:43.531 Found net devices under 0000:af:00.0: cvl_0_0 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:43.531 Found net devices under 0000:af:00.1: cvl_0_1 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:43.531 10:37:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:43.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:43.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:35:43.531 00:35:43.531 --- 10.0.0.2 ping statistics --- 00:35:43.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:43.531 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:43.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:43.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:35:43.531 00:35:43.531 --- 10.0.0.1 ping statistics --- 00:35:43.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:43.531 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:43.531 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.532 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.532 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:43.532 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.532 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:43.532 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:43.532 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:43.532 10:37:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:43.532 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:43.532 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:43.532 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:43.532 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:43.532 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:43.532 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:43.532 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:35:43.532 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:43.532 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:43.532 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:43.532 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:46.066 Waiting for block devices as requested 00:35:46.066 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:46.066 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:46.066 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:46.325 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:46.325 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:46.325 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:46.325 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:46.584 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:46.584 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:46.585 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:46.843 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:46.843 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:46.843 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:46.843 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:47.102 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:47.102 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:47.102 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:47.361 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:47.361 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:47.361 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:47.361 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:47.361 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:47.361 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
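[editor's note] At this point the test stops using the SPDK userspace target and instead drives the Linux kernel nvmet target through configfs: configure_kernel_target loads nvmet, creates a subsystem, a namespace backed by the local /dev/nvme0n1, and a TCP listener, then links the subsystem to the port. The xtrace below shows the mkdir/echo/ln commands but not the redirection targets, so the following is only a condensed sketch of that sequence; the NQN, device, address and port are the ones visible in the log, while the configfs attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet ones and are an assumption about where each traced echo lands, not a verbatim copy of nvmf/common.sh.

  modprobe nvmet nvmet-tcp                        # kernel target core + TCP transport
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$subsys"                                 # create the subsystem
  mkdir "$subsys/namespaces/1"                    # namespace 1, backed by a local NVMe drive
  mkdir "$nvmet/ports/1"                          # one fabrics port
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # shows up as Model Number later in the log
  echo 1              > "$subsys/attr_allow_any_host"            # assumption: test allows any host NQN
  echo /dev/nvme0n1   > "$subsys/namespaces/1/device_path"
  echo 1              > "$subsys/namespaces/1/enable"
  echo 10.0.0.1       > "$nvmet/ports/1/addr_traddr"             # target IP from the netns setup above
  echo tcp            > "$nvmet/ports/1/addr_trtype"
  echo 4420           > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4           > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"    # expose the subsystem on the listener

Once the link is in place, the nvme discover and spdk_nvme_identify calls later in the trace read the discovery log and identify data over that 10.0.0.1:4420 listener.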
00:35:47.361 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:47.361 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:47.361 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:47.361 No valid GPT data, bailing 00:35:47.361 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:47.361 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:47.361 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:47.361 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:47.361 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:47.361 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:47.361 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:47.361 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:47.361 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:47.361 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:35:47.361 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:47.361 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:35:47.361 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:47.361 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:35:47.361 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:35:47.361 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:35:47.361 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:47.361 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:47.361 00:35:47.361 Discovery Log Number of Records 2, Generation counter 2 00:35:47.361 =====Discovery Log Entry 0====== 00:35:47.361 trtype: tcp 00:35:47.361 adrfam: ipv4 00:35:47.361 subtype: current discovery subsystem 00:35:47.361 treq: not specified, sq flow control disable supported 00:35:47.361 portid: 1 00:35:47.361 trsvcid: 4420 00:35:47.361 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:47.361 traddr: 10.0.0.1 00:35:47.361 eflags: none 00:35:47.361 sectype: none 00:35:47.361 =====Discovery Log Entry 1====== 00:35:47.361 trtype: tcp 00:35:47.361 adrfam: ipv4 00:35:47.361 subtype: nvme subsystem 00:35:47.361 treq: not specified, sq flow control disable 
supported 00:35:47.361 portid: 1 00:35:47.361 trsvcid: 4420 00:35:47.361 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:47.361 traddr: 10.0.0.1 00:35:47.361 eflags: none 00:35:47.361 sectype: none 00:35:47.361 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:47.361 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:47.621 ===================================================== 00:35:47.621 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:47.621 ===================================================== 00:35:47.621 Controller Capabilities/Features 00:35:47.621 ================================ 00:35:47.621 Vendor ID: 0000 00:35:47.621 Subsystem Vendor ID: 0000 00:35:47.621 Serial Number: 10b65c4dfd022409e507 00:35:47.621 Model Number: Linux 00:35:47.621 Firmware Version: 6.8.9-20 00:35:47.621 Recommended Arb Burst: 0 00:35:47.621 IEEE OUI Identifier: 00 00 00 00:35:47.621 Multi-path I/O 00:35:47.621 May have multiple subsystem ports: No 00:35:47.621 May have multiple controllers: No 00:35:47.621 Associated with SR-IOV VF: No 00:35:47.621 Max Data Transfer Size: Unlimited 00:35:47.621 Max Number of Namespaces: 0 00:35:47.621 Max Number of I/O Queues: 1024 00:35:47.621 NVMe Specification Version (VS): 1.3 00:35:47.621 NVMe Specification Version (Identify): 1.3 00:35:47.621 Maximum Queue Entries: 1024 00:35:47.621 Contiguous Queues Required: No 00:35:47.621 Arbitration Mechanisms Supported 00:35:47.621 Weighted Round Robin: Not Supported 00:35:47.621 Vendor Specific: Not Supported 00:35:47.621 Reset Timeout: 7500 ms 00:35:47.621 Doorbell Stride: 4 bytes 00:35:47.621 NVM Subsystem Reset: Not Supported 00:35:47.621 Command Sets Supported 00:35:47.621 NVM Command Set: Supported 00:35:47.621 Boot Partition: Not Supported 00:35:47.621 Memory Page Size Minimum: 4096 bytes 00:35:47.621 Memory Page Size Maximum: 4096 bytes 00:35:47.621 Persistent Memory Region: Not Supported 00:35:47.621 Optional Asynchronous Events Supported 00:35:47.621 Namespace Attribute Notices: Not Supported 00:35:47.621 Firmware Activation Notices: Not Supported 00:35:47.621 ANA Change Notices: Not Supported 00:35:47.621 PLE Aggregate Log Change Notices: Not Supported 00:35:47.621 LBA Status Info Alert Notices: Not Supported 00:35:47.621 EGE Aggregate Log Change Notices: Not Supported 00:35:47.621 Normal NVM Subsystem Shutdown event: Not Supported 00:35:47.621 Zone Descriptor Change Notices: Not Supported 00:35:47.622 Discovery Log Change Notices: Supported 00:35:47.622 Controller Attributes 00:35:47.622 128-bit Host Identifier: Not Supported 00:35:47.622 Non-Operational Permissive Mode: Not Supported 00:35:47.622 NVM Sets: Not Supported 00:35:47.622 Read Recovery Levels: Not Supported 00:35:47.622 Endurance Groups: Not Supported 00:35:47.622 Predictable Latency Mode: Not Supported 00:35:47.622 Traffic Based Keep ALive: Not Supported 00:35:47.622 Namespace Granularity: Not Supported 00:35:47.622 SQ Associations: Not Supported 00:35:47.622 UUID List: Not Supported 00:35:47.622 Multi-Domain Subsystem: Not Supported 00:35:47.622 Fixed Capacity Management: Not Supported 00:35:47.622 Variable Capacity Management: Not Supported 00:35:47.622 Delete Endurance Group: Not Supported 00:35:47.622 Delete NVM Set: Not Supported 00:35:47.622 Extended LBA Formats Supported: Not Supported 00:35:47.622 Flexible Data Placement 
Supported: Not Supported 00:35:47.622 00:35:47.622 Controller Memory Buffer Support 00:35:47.622 ================================ 00:35:47.622 Supported: No 00:35:47.622 00:35:47.622 Persistent Memory Region Support 00:35:47.622 ================================ 00:35:47.622 Supported: No 00:35:47.622 00:35:47.622 Admin Command Set Attributes 00:35:47.622 ============================ 00:35:47.622 Security Send/Receive: Not Supported 00:35:47.622 Format NVM: Not Supported 00:35:47.622 Firmware Activate/Download: Not Supported 00:35:47.622 Namespace Management: Not Supported 00:35:47.622 Device Self-Test: Not Supported 00:35:47.622 Directives: Not Supported 00:35:47.622 NVMe-MI: Not Supported 00:35:47.622 Virtualization Management: Not Supported 00:35:47.622 Doorbell Buffer Config: Not Supported 00:35:47.622 Get LBA Status Capability: Not Supported 00:35:47.622 Command & Feature Lockdown Capability: Not Supported 00:35:47.622 Abort Command Limit: 1 00:35:47.622 Async Event Request Limit: 1 00:35:47.622 Number of Firmware Slots: N/A 00:35:47.622 Firmware Slot 1 Read-Only: N/A 00:35:47.622 Firmware Activation Without Reset: N/A 00:35:47.622 Multiple Update Detection Support: N/A 00:35:47.622 Firmware Update Granularity: No Information Provided 00:35:47.622 Per-Namespace SMART Log: No 00:35:47.622 Asymmetric Namespace Access Log Page: Not Supported 00:35:47.622 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:47.622 Command Effects Log Page: Not Supported 00:35:47.622 Get Log Page Extended Data: Supported 00:35:47.622 Telemetry Log Pages: Not Supported 00:35:47.622 Persistent Event Log Pages: Not Supported 00:35:47.622 Supported Log Pages Log Page: May Support 00:35:47.622 Commands Supported & Effects Log Page: Not Supported 00:35:47.622 Feature Identifiers & Effects Log Page:May Support 00:35:47.622 NVMe-MI Commands & Effects Log Page: May Support 00:35:47.622 Data Area 4 for Telemetry Log: Not Supported 00:35:47.622 Error Log Page Entries Supported: 1 00:35:47.622 Keep Alive: Not Supported 00:35:47.622 00:35:47.622 NVM Command Set Attributes 00:35:47.622 ========================== 00:35:47.622 Submission Queue Entry Size 00:35:47.622 Max: 1 00:35:47.622 Min: 1 00:35:47.622 Completion Queue Entry Size 00:35:47.622 Max: 1 00:35:47.622 Min: 1 00:35:47.622 Number of Namespaces: 0 00:35:47.622 Compare Command: Not Supported 00:35:47.622 Write Uncorrectable Command: Not Supported 00:35:47.622 Dataset Management Command: Not Supported 00:35:47.622 Write Zeroes Command: Not Supported 00:35:47.622 Set Features Save Field: Not Supported 00:35:47.622 Reservations: Not Supported 00:35:47.622 Timestamp: Not Supported 00:35:47.622 Copy: Not Supported 00:35:47.622 Volatile Write Cache: Not Present 00:35:47.622 Atomic Write Unit (Normal): 1 00:35:47.622 Atomic Write Unit (PFail): 1 00:35:47.622 Atomic Compare & Write Unit: 1 00:35:47.622 Fused Compare & Write: Not Supported 00:35:47.622 Scatter-Gather List 00:35:47.622 SGL Command Set: Supported 00:35:47.622 SGL Keyed: Not Supported 00:35:47.622 SGL Bit Bucket Descriptor: Not Supported 00:35:47.622 SGL Metadata Pointer: Not Supported 00:35:47.622 Oversized SGL: Not Supported 00:35:47.622 SGL Metadata Address: Not Supported 00:35:47.622 SGL Offset: Supported 00:35:47.622 Transport SGL Data Block: Not Supported 00:35:47.622 Replay Protected Memory Block: Not Supported 00:35:47.622 00:35:47.622 Firmware Slot Information 00:35:47.622 ========================= 00:35:47.622 Active slot: 0 00:35:47.622 00:35:47.622 00:35:47.622 Error Log 00:35:47.622 
========= 00:35:47.622 00:35:47.622 Active Namespaces 00:35:47.622 ================= 00:35:47.622 Discovery Log Page 00:35:47.622 ================== 00:35:47.622 Generation Counter: 2 00:35:47.622 Number of Records: 2 00:35:47.622 Record Format: 0 00:35:47.622 00:35:47.622 Discovery Log Entry 0 00:35:47.622 ---------------------- 00:35:47.622 Transport Type: 3 (TCP) 00:35:47.622 Address Family: 1 (IPv4) 00:35:47.622 Subsystem Type: 3 (Current Discovery Subsystem) 00:35:47.622 Entry Flags: 00:35:47.622 Duplicate Returned Information: 0 00:35:47.622 Explicit Persistent Connection Support for Discovery: 0 00:35:47.622 Transport Requirements: 00:35:47.622 Secure Channel: Not Specified 00:35:47.622 Port ID: 1 (0x0001) 00:35:47.622 Controller ID: 65535 (0xffff) 00:35:47.622 Admin Max SQ Size: 32 00:35:47.622 Transport Service Identifier: 4420 00:35:47.622 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:47.622 Transport Address: 10.0.0.1 00:35:47.622 Discovery Log Entry 1 00:35:47.622 ---------------------- 00:35:47.622 Transport Type: 3 (TCP) 00:35:47.622 Address Family: 1 (IPv4) 00:35:47.622 Subsystem Type: 2 (NVM Subsystem) 00:35:47.622 Entry Flags: 00:35:47.622 Duplicate Returned Information: 0 00:35:47.622 Explicit Persistent Connection Support for Discovery: 0 00:35:47.622 Transport Requirements: 00:35:47.622 Secure Channel: Not Specified 00:35:47.622 Port ID: 1 (0x0001) 00:35:47.622 Controller ID: 65535 (0xffff) 00:35:47.622 Admin Max SQ Size: 32 00:35:47.622 Transport Service Identifier: 4420 00:35:47.622 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:47.622 Transport Address: 10.0.0.1 00:35:47.622 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:47.622 get_feature(0x01) failed 00:35:47.622 get_feature(0x02) failed 00:35:47.622 get_feature(0x04) failed 00:35:47.622 ===================================================== 00:35:47.622 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:47.622 ===================================================== 00:35:47.622 Controller Capabilities/Features 00:35:47.622 ================================ 00:35:47.622 Vendor ID: 0000 00:35:47.622 Subsystem Vendor ID: 0000 00:35:47.622 Serial Number: 5baae98bc6dbbbe773fa 00:35:47.622 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:47.622 Firmware Version: 6.8.9-20 00:35:47.622 Recommended Arb Burst: 6 00:35:47.622 IEEE OUI Identifier: 00 00 00 00:35:47.622 Multi-path I/O 00:35:47.622 May have multiple subsystem ports: Yes 00:35:47.622 May have multiple controllers: Yes 00:35:47.622 Associated with SR-IOV VF: No 00:35:47.622 Max Data Transfer Size: Unlimited 00:35:47.622 Max Number of Namespaces: 1024 00:35:47.622 Max Number of I/O Queues: 128 00:35:47.622 NVMe Specification Version (VS): 1.3 00:35:47.622 NVMe Specification Version (Identify): 1.3 00:35:47.622 Maximum Queue Entries: 1024 00:35:47.622 Contiguous Queues Required: No 00:35:47.622 Arbitration Mechanisms Supported 00:35:47.622 Weighted Round Robin: Not Supported 00:35:47.622 Vendor Specific: Not Supported 00:35:47.622 Reset Timeout: 7500 ms 00:35:47.622 Doorbell Stride: 4 bytes 00:35:47.622 NVM Subsystem Reset: Not Supported 00:35:47.622 Command Sets Supported 00:35:47.622 NVM Command Set: Supported 00:35:47.622 Boot Partition: Not Supported 00:35:47.622 
Memory Page Size Minimum: 4096 bytes 00:35:47.622 Memory Page Size Maximum: 4096 bytes 00:35:47.622 Persistent Memory Region: Not Supported 00:35:47.622 Optional Asynchronous Events Supported 00:35:47.622 Namespace Attribute Notices: Supported 00:35:47.622 Firmware Activation Notices: Not Supported 00:35:47.622 ANA Change Notices: Supported 00:35:47.622 PLE Aggregate Log Change Notices: Not Supported 00:35:47.622 LBA Status Info Alert Notices: Not Supported 00:35:47.622 EGE Aggregate Log Change Notices: Not Supported 00:35:47.622 Normal NVM Subsystem Shutdown event: Not Supported 00:35:47.622 Zone Descriptor Change Notices: Not Supported 00:35:47.622 Discovery Log Change Notices: Not Supported 00:35:47.622 Controller Attributes 00:35:47.623 128-bit Host Identifier: Supported 00:35:47.623 Non-Operational Permissive Mode: Not Supported 00:35:47.623 NVM Sets: Not Supported 00:35:47.623 Read Recovery Levels: Not Supported 00:35:47.623 Endurance Groups: Not Supported 00:35:47.623 Predictable Latency Mode: Not Supported 00:35:47.623 Traffic Based Keep ALive: Supported 00:35:47.623 Namespace Granularity: Not Supported 00:35:47.623 SQ Associations: Not Supported 00:35:47.623 UUID List: Not Supported 00:35:47.623 Multi-Domain Subsystem: Not Supported 00:35:47.623 Fixed Capacity Management: Not Supported 00:35:47.623 Variable Capacity Management: Not Supported 00:35:47.623 Delete Endurance Group: Not Supported 00:35:47.623 Delete NVM Set: Not Supported 00:35:47.623 Extended LBA Formats Supported: Not Supported 00:35:47.623 Flexible Data Placement Supported: Not Supported 00:35:47.623 00:35:47.623 Controller Memory Buffer Support 00:35:47.623 ================================ 00:35:47.623 Supported: No 00:35:47.623 00:35:47.623 Persistent Memory Region Support 00:35:47.623 ================================ 00:35:47.623 Supported: No 00:35:47.623 00:35:47.623 Admin Command Set Attributes 00:35:47.623 ============================ 00:35:47.623 Security Send/Receive: Not Supported 00:35:47.623 Format NVM: Not Supported 00:35:47.623 Firmware Activate/Download: Not Supported 00:35:47.623 Namespace Management: Not Supported 00:35:47.623 Device Self-Test: Not Supported 00:35:47.623 Directives: Not Supported 00:35:47.623 NVMe-MI: Not Supported 00:35:47.623 Virtualization Management: Not Supported 00:35:47.623 Doorbell Buffer Config: Not Supported 00:35:47.623 Get LBA Status Capability: Not Supported 00:35:47.623 Command & Feature Lockdown Capability: Not Supported 00:35:47.623 Abort Command Limit: 4 00:35:47.623 Async Event Request Limit: 4 00:35:47.623 Number of Firmware Slots: N/A 00:35:47.623 Firmware Slot 1 Read-Only: N/A 00:35:47.623 Firmware Activation Without Reset: N/A 00:35:47.623 Multiple Update Detection Support: N/A 00:35:47.623 Firmware Update Granularity: No Information Provided 00:35:47.623 Per-Namespace SMART Log: Yes 00:35:47.623 Asymmetric Namespace Access Log Page: Supported 00:35:47.623 ANA Transition Time : 10 sec 00:35:47.623 00:35:47.623 Asymmetric Namespace Access Capabilities 00:35:47.623 ANA Optimized State : Supported 00:35:47.623 ANA Non-Optimized State : Supported 00:35:47.623 ANA Inaccessible State : Supported 00:35:47.623 ANA Persistent Loss State : Supported 00:35:47.623 ANA Change State : Supported 00:35:47.623 ANAGRPID is not changed : No 00:35:47.623 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:47.623 00:35:47.623 ANA Group Identifier Maximum : 128 00:35:47.623 Number of ANA Group Identifiers : 128 00:35:47.623 Max Number of Allowed Namespaces : 1024 00:35:47.623 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:47.623 Command Effects Log Page: Supported 00:35:47.623 Get Log Page Extended Data: Supported 00:35:47.623 Telemetry Log Pages: Not Supported 00:35:47.623 Persistent Event Log Pages: Not Supported 00:35:47.623 Supported Log Pages Log Page: May Support 00:35:47.623 Commands Supported & Effects Log Page: Not Supported 00:35:47.623 Feature Identifiers & Effects Log Page:May Support 00:35:47.623 NVMe-MI Commands & Effects Log Page: May Support 00:35:47.623 Data Area 4 for Telemetry Log: Not Supported 00:35:47.623 Error Log Page Entries Supported: 128 00:35:47.623 Keep Alive: Supported 00:35:47.623 Keep Alive Granularity: 1000 ms 00:35:47.623 00:35:47.623 NVM Command Set Attributes 00:35:47.623 ========================== 00:35:47.623 Submission Queue Entry Size 00:35:47.623 Max: 64 00:35:47.623 Min: 64 00:35:47.623 Completion Queue Entry Size 00:35:47.623 Max: 16 00:35:47.623 Min: 16 00:35:47.623 Number of Namespaces: 1024 00:35:47.623 Compare Command: Not Supported 00:35:47.623 Write Uncorrectable Command: Not Supported 00:35:47.623 Dataset Management Command: Supported 00:35:47.623 Write Zeroes Command: Supported 00:35:47.623 Set Features Save Field: Not Supported 00:35:47.623 Reservations: Not Supported 00:35:47.623 Timestamp: Not Supported 00:35:47.623 Copy: Not Supported 00:35:47.623 Volatile Write Cache: Present 00:35:47.623 Atomic Write Unit (Normal): 1 00:35:47.623 Atomic Write Unit (PFail): 1 00:35:47.623 Atomic Compare & Write Unit: 1 00:35:47.623 Fused Compare & Write: Not Supported 00:35:47.623 Scatter-Gather List 00:35:47.623 SGL Command Set: Supported 00:35:47.623 SGL Keyed: Not Supported 00:35:47.623 SGL Bit Bucket Descriptor: Not Supported 00:35:47.623 SGL Metadata Pointer: Not Supported 00:35:47.623 Oversized SGL: Not Supported 00:35:47.623 SGL Metadata Address: Not Supported 00:35:47.623 SGL Offset: Supported 00:35:47.623 Transport SGL Data Block: Not Supported 00:35:47.623 Replay Protected Memory Block: Not Supported 00:35:47.623 00:35:47.623 Firmware Slot Information 00:35:47.623 ========================= 00:35:47.623 Active slot: 0 00:35:47.623 00:35:47.623 Asymmetric Namespace Access 00:35:47.623 =========================== 00:35:47.623 Change Count : 0 00:35:47.623 Number of ANA Group Descriptors : 1 00:35:47.623 ANA Group Descriptor : 0 00:35:47.623 ANA Group ID : 1 00:35:47.623 Number of NSID Values : 1 00:35:47.623 Change Count : 0 00:35:47.623 ANA State : 1 00:35:47.623 Namespace Identifier : 1 00:35:47.623 00:35:47.623 Commands Supported and Effects 00:35:47.623 ============================== 00:35:47.623 Admin Commands 00:35:47.623 -------------- 00:35:47.623 Get Log Page (02h): Supported 00:35:47.623 Identify (06h): Supported 00:35:47.623 Abort (08h): Supported 00:35:47.623 Set Features (09h): Supported 00:35:47.623 Get Features (0Ah): Supported 00:35:47.623 Asynchronous Event Request (0Ch): Supported 00:35:47.623 Keep Alive (18h): Supported 00:35:47.623 I/O Commands 00:35:47.623 ------------ 00:35:47.623 Flush (00h): Supported 00:35:47.623 Write (01h): Supported LBA-Change 00:35:47.623 Read (02h): Supported 00:35:47.623 Write Zeroes (08h): Supported LBA-Change 00:35:47.623 Dataset Management (09h): Supported 00:35:47.623 00:35:47.623 Error Log 00:35:47.623 ========= 00:35:47.623 Entry: 0 00:35:47.623 Error Count: 0x3 00:35:47.623 Submission Queue Id: 0x0 00:35:47.623 Command Id: 0x5 00:35:47.623 Phase Bit: 0 00:35:47.623 Status Code: 0x2 00:35:47.623 Status Code Type: 0x0 00:35:47.623 Do Not Retry: 1 00:35:47.623 
Error Location: 0x28 00:35:47.623 LBA: 0x0 00:35:47.623 Namespace: 0x0 00:35:47.623 Vendor Log Page: 0x0 00:35:47.623 ----------- 00:35:47.623 Entry: 1 00:35:47.623 Error Count: 0x2 00:35:47.623 Submission Queue Id: 0x0 00:35:47.623 Command Id: 0x5 00:35:47.623 Phase Bit: 0 00:35:47.623 Status Code: 0x2 00:35:47.623 Status Code Type: 0x0 00:35:47.623 Do Not Retry: 1 00:35:47.623 Error Location: 0x28 00:35:47.623 LBA: 0x0 00:35:47.623 Namespace: 0x0 00:35:47.623 Vendor Log Page: 0x0 00:35:47.623 ----------- 00:35:47.623 Entry: 2 00:35:47.623 Error Count: 0x1 00:35:47.623 Submission Queue Id: 0x0 00:35:47.623 Command Id: 0x4 00:35:47.623 Phase Bit: 0 00:35:47.623 Status Code: 0x2 00:35:47.623 Status Code Type: 0x0 00:35:47.623 Do Not Retry: 1 00:35:47.623 Error Location: 0x28 00:35:47.623 LBA: 0x0 00:35:47.623 Namespace: 0x0 00:35:47.623 Vendor Log Page: 0x0 00:35:47.623 00:35:47.623 Number of Queues 00:35:47.623 ================ 00:35:47.623 Number of I/O Submission Queues: 128 00:35:47.623 Number of I/O Completion Queues: 128 00:35:47.623 00:35:47.623 ZNS Specific Controller Data 00:35:47.623 ============================ 00:35:47.623 Zone Append Size Limit: 0 00:35:47.623 00:35:47.623 00:35:47.623 Active Namespaces 00:35:47.623 ================= 00:35:47.623 get_feature(0x05) failed 00:35:47.623 Namespace ID:1 00:35:47.623 Command Set Identifier: NVM (00h) 00:35:47.623 Deallocate: Supported 00:35:47.623 Deallocated/Unwritten Error: Not Supported 00:35:47.623 Deallocated Read Value: Unknown 00:35:47.623 Deallocate in Write Zeroes: Not Supported 00:35:47.623 Deallocated Guard Field: 0xFFFF 00:35:47.623 Flush: Supported 00:35:47.623 Reservation: Not Supported 00:35:47.623 Namespace Sharing Capabilities: Multiple Controllers 00:35:47.623 Size (in LBAs): 1953525168 (931GiB) 00:35:47.623 Capacity (in LBAs): 1953525168 (931GiB) 00:35:47.623 Utilization (in LBAs): 1953525168 (931GiB) 00:35:47.623 UUID: d7b63c7e-519e-4f2b-bf91-cba7eb2c23fd 00:35:47.623 Thin Provisioning: Not Supported 00:35:47.623 Per-NS Atomic Units: Yes 00:35:47.623 Atomic Boundary Size (Normal): 0 00:35:47.623 Atomic Boundary Size (PFail): 0 00:35:47.624 Atomic Boundary Offset: 0 00:35:47.624 NGUID/EUI64 Never Reused: No 00:35:47.624 ANA group ID: 1 00:35:47.624 Namespace Write Protected: No 00:35:47.624 Number of LBA Formats: 1 00:35:47.624 Current LBA Format: LBA Format #00 00:35:47.624 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:47.624 00:35:47.624 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:47.624 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:47.624 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:35:47.624 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:47.624 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:35:47.624 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:47.624 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:47.624 rmmod nvme_tcp 00:35:47.624 rmmod nvme_fabrics 00:35:47.624 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:47.624 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:35:47.624 10:37:41 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:35:47.624 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:47.624 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:47.624 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:47.624 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:47.624 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:35:47.624 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:47.624 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:35:47.624 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:47.624 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:47.624 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:47.624 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:47.624 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:47.624 10:37:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:50.160 10:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:50.160 10:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:50.160 10:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:50.160 10:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:35:50.160 10:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:50.160 10:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:50.160 10:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:50.160 10:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:50.160 10:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:50.160 10:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:50.160 10:37:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:52.265 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:52.265 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:52.265 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:52.265 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:52.265 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:52.265 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:35:52.265 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:52.265 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:52.265 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:52.265 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:52.265 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:52.524 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:52.524 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:52.524 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:52.524 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:52.524 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:53.461 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:53.461 00:35:53.461 real 0m15.344s 00:35:53.461 user 0m3.970s 00:35:53.461 sys 0m7.703s 00:35:53.461 10:37:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:53.461 10:37:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:53.461 ************************************ 00:35:53.461 END TEST nvmf_identify_kernel_target 00:35:53.461 ************************************ 00:35:53.461 10:37:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:53.461 10:37:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:53.461 10:37:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:53.461 10:37:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:53.461 ************************************ 00:35:53.461 START TEST nvmf_auth_host 00:35:53.461 ************************************ 00:35:53.461 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:35:53.461 * Looking for test storage... 
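[editor's note] The cleanup traced just before nvmf_auth_host kicks off reverses the setup sketched earlier: nvmftestfini unloads nvme-tcp/nvme-fabrics on the initiator side, restores iptables minus the SPDK_NVMF rule, deletes the cvl_0_0_ns_spdk namespace and flushes the test addresses, and clean_kernel_target then dismantles the configfs tree before removing the nvmet modules. A condensed sketch of that teardown, with the same caveat as above that the exact attribute file touched by the traced "echo 0" is an assumption:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  echo 0 > "$subsys/namespaces/1/enable"          # disable the namespace before tearing it down
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir "$subsys/namespaces/1"                    # configfs dirs must be emptied bottom-up
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet                     # finally drop the kernel target modules

With the kernel target gone, setup.sh rebinds the ioatdma/NVMe devices to vfio-pci (the block of "ioatdma -> vfio-pci" lines above) so the next test starts from the usual SPDK device ownership.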
00:35:53.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:53.461 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:53.461 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:35:53.461 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:53.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.720 --rc genhtml_branch_coverage=1 00:35:53.720 --rc genhtml_function_coverage=1 00:35:53.720 --rc genhtml_legend=1 00:35:53.720 --rc geninfo_all_blocks=1 00:35:53.720 --rc geninfo_unexecuted_blocks=1 00:35:53.720 00:35:53.720 ' 00:35:53.720 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:53.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.720 --rc genhtml_branch_coverage=1 00:35:53.720 --rc genhtml_function_coverage=1 00:35:53.720 --rc genhtml_legend=1 00:35:53.720 --rc geninfo_all_blocks=1 00:35:53.721 --rc geninfo_unexecuted_blocks=1 00:35:53.721 00:35:53.721 ' 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:53.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.721 --rc genhtml_branch_coverage=1 00:35:53.721 --rc genhtml_function_coverage=1 00:35:53.721 --rc genhtml_legend=1 00:35:53.721 --rc geninfo_all_blocks=1 00:35:53.721 --rc geninfo_unexecuted_blocks=1 00:35:53.721 00:35:53.721 ' 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:53.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.721 --rc genhtml_branch_coverage=1 00:35:53.721 --rc genhtml_function_coverage=1 00:35:53.721 --rc genhtml_legend=1 00:35:53.721 --rc geninfo_all_blocks=1 00:35:53.721 --rc geninfo_unexecuted_blocks=1 00:35:53.721 00:35:53.721 ' 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:53.721 10:37:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:53.721 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:35:53.721 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:35:58.993 10:37:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:58.993 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:58.993 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.993 
10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:58.993 Found net devices under 0000:af:00.0: cvl_0_0 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:58.993 Found net devices under 0000:af:00.1: cvl_0_1 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:58.993 10:37:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:58.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:58.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:35:58.993 00:35:58.993 --- 10.0.0.2 ping statistics --- 00:35:58.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.993 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:35:58.993 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:58.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:58.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:35:58.993 00:35:58.993 --- 10.0.0.1 ping statistics --- 00:35:58.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:58.993 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:35:58.994 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:58.994 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:35:58.994 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:58.994 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:58.994 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:58.994 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:58.994 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:58.994 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:58.994 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:59.253 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:59.253 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:59.253 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:59.253 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:59.253 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=4128751 00:35:59.253 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 4128751 00:35:59.253 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:59.253 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 4128751 ']' 00:35:59.253 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:59.253 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:59.253 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
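The nvmftestinit/nvmf_tcp_init sequence above splits the two e810 ports into an initiator side (cvl_0_1, 10.0.0.1) in the default namespace and a target side (cvl_0_0, 10.0.0.2) isolated inside cvl_0_0_ns_spdk, then verifies reachability in both directions before loading nvme-tcp. A condensed sketch of that topology, assuming the interface names from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic reach the listener
ping -c 1 10.0.0.2                                              # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator

The nvmf_tgt process launched just above runs under the same "ip netns exec cvl_0_0_ns_spdk" wrapper, so its port 4420 listener binds to 10.0.0.2 inside the namespace.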
00:35:59.253 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:59.253 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.189 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:00.189 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:36:00.189 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:00.189 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:00.189 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=42c80cc3401b50481059c033b9c0c337 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.U1U 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 42c80cc3401b50481059c033b9c0c337 0 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 42c80cc3401b50481059c033b9c0c337 0 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=42c80cc3401b50481059c033b9c0c337 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.U1U 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.U1U 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.U1U 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:00.190 10:37:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6769436b4852b8df41319cc8e4aecaf70796d4c2831ccaa4a778fa686461282e 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.WJO 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6769436b4852b8df41319cc8e4aecaf70796d4c2831ccaa4a778fa686461282e 3 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6769436b4852b8df41319cc8e4aecaf70796d4c2831ccaa4a778fa686461282e 3 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6769436b4852b8df41319cc8e4aecaf70796d4c2831ccaa4a778fa686461282e 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.WJO 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.WJO 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.WJO 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ecebd3ab15d24e3ff69554e47b64f799a8a2754fbfc8d284 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.K0e 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ecebd3ab15d24e3ff69554e47b64f799a8a2754fbfc8d284 0 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ecebd3ab15d24e3ff69554e47b64f799a8a2754fbfc8d284 0 
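Each gen_dhchap_key call in this stretch reads len/2 random bytes with xxd, keeps the resulting len-character hex string as the secret, and wraps it in the DH-HMAC-CHAP representation DHHC-1:<hash id>:<base64 of secret plus CRC-32>: before writing it to a /tmp/spdk.key-* file for the keyring. A sketch of that encoding, assuming the CRC-32 framing used by the script's embedded python helper:

len=48      # secret length in printable characters; xxd reads len/2 random bytes
digest=0    # 0 = none, 1 = sha256, 2 = sha384, 3 = sha512
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" "$digest" > "$file" <<'PY'
import sys, base64, zlib
raw = sys.argv[1].encode()                              # the printable hex string itself is the secret
crc = zlib.crc32(raw).to_bytes(4, byteorder="little")   # 4-byte CRC appended before base64
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(raw + crc).decode()))
PY
chmod 0600 "$file"   # keep the secret private before handing it to keyring_file_add_key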
00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ecebd3ab15d24e3ff69554e47b64f799a8a2754fbfc8d284 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:00.190 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.K0e 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.K0e 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.K0e 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8ba2a87247e5ae5c8902a2ec6f85cc8fcb3f766ac17f8537 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ofV 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8ba2a87247e5ae5c8902a2ec6f85cc8fcb3f766ac17f8537 2 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8ba2a87247e5ae5c8902a2ec6f85cc8fcb3f766ac17f8537 2 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8ba2a87247e5ae5c8902a2ec6f85cc8fcb3f766ac17f8537 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ofV 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ofV 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.ofV 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:00.190 10:37:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:00.190 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=15b82bad36c77b6ced11e74b6b3f799f 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.x8h 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 15b82bad36c77b6ced11e74b6b3f799f 1 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 15b82bad36c77b6ced11e74b6b3f799f 1 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=15b82bad36c77b6ced11e74b6b3f799f 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.x8h 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.x8h 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.x8h 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=306956986549f352450fb8769674b3d5 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Xfs 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 306956986549f352450fb8769674b3d5 1 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 306956986549f352450fb8769674b3d5 1 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=306956986549f352450fb8769674b3d5 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Xfs 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Xfs 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Xfs 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=779c730fe022d9d812beeb78f56e3c9cd0b23cc6f5df7d6a 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.i1W 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 779c730fe022d9d812beeb78f56e3c9cd0b23cc6f5df7d6a 2 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 779c730fe022d9d812beeb78f56e3c9cd0b23cc6f5df7d6a 2 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=779c730fe022d9d812beeb78f56e3c9cd0b23cc6f5df7d6a 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.i1W 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.i1W 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.i1W 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:00.450 10:37:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d964151fe9bb9fd802b5c14bdaee350d 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.RN3 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d964151fe9bb9fd802b5c14bdaee350d 0 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d964151fe9bb9fd802b5c14bdaee350d 0 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d964151fe9bb9fd802b5c14bdaee350d 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.RN3 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.RN3 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.RN3 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e4e368ed08e2e8c7a6b3cf7876dda7b0e6d60e0e351d9525036f8d85805bfae1 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.I0b 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e4e368ed08e2e8c7a6b3cf7876dda7b0e6d60e0e351d9525036f8d85805bfae1 3 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e4e368ed08e2e8c7a6b3cf7876dda7b0e6d60e0e351d9525036f8d85805bfae1 3 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:00.450 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e4e368ed08e2e8c7a6b3cf7876dda7b0e6d60e0e351d9525036f8d85805bfae1 00:36:00.709 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:36:00.709 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:36:00.709 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.I0b 00:36:00.709 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.I0b 00:36:00.709 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.I0b 00:36:00.709 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:36:00.709 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 4128751 00:36:00.709 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 4128751 ']' 00:36:00.710 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:00.710 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:00.710 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:00.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:00.710 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:00.710 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.710 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:00.710 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:36:00.710 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:00.710 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.U1U 00:36:00.710 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.710 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.710 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.710 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.WJO ]] 00:36:00.710 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WJO 00:36:00.710 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.710 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.K0e 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.ofV ]] 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.ofV 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.x8h 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Xfs ]] 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Xfs 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.i1W 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.RN3 ]] 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.RN3 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.I0b 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:00.969 10:37:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:00.969 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:03.499 Waiting for block devices as requested 00:36:03.499 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:03.499 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:03.758 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:03.758 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:03.758 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:03.758 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:04.017 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:04.017 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:04.017 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:04.017 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:04.275 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:04.275 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:04.275 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:04.534 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:04.534 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:04.534 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:04.534 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:05.102 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:05.102 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:05.102 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:05.102 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:36:05.102 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:05.102 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:05.102 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:05.102 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:05.102 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:05.102 No valid GPT data, bailing 00:36:05.102 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:05.361 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:36:05.361 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:36:05.361 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:05.361 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:36:05.361 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:05.361 10:37:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:36:05.361 00:36:05.361 Discovery Log Number of Records 2, Generation counter 2 00:36:05.361 =====Discovery Log Entry 0====== 00:36:05.361 trtype: tcp 00:36:05.361 adrfam: ipv4 00:36:05.361 subtype: current discovery subsystem 00:36:05.361 treq: not specified, sq flow control disable supported 00:36:05.361 portid: 1 00:36:05.361 trsvcid: 4420 00:36:05.361 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:05.361 traddr: 10.0.0.1 00:36:05.361 eflags: none 00:36:05.361 sectype: none 00:36:05.361 =====Discovery Log Entry 1====== 00:36:05.361 trtype: tcp 00:36:05.361 adrfam: ipv4 00:36:05.361 subtype: nvme subsystem 00:36:05.361 treq: not specified, sq flow control disable supported 00:36:05.361 portid: 1 00:36:05.361 trsvcid: 4420 00:36:05.361 subnqn: nqn.2024-02.io.spdk:cnode0 00:36:05.361 traddr: 10.0.0.1 00:36:05.361 eflags: none 00:36:05.361 sectype: none 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: ]] 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:05.361 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:05.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:05.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:05.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.621 nvme0n1 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: ]] 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
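From this point the trace repeats the same RPC sequence for every digest, DH group, and key index combination. As a readability aid, a minimal sketch of one such pass is shown below, written against SPDK's scripts/rpc.py directly (the rpc_cmd seen in the trace is the test harness wrapper around it); the /tmp key file paths are illustrative stand-ins for the DHHC-1 key files generated earlier in the run, and the target is the kernel nvmet subsystem configured above, listening on 10.0.0.1:4420.

#!/usr/bin/env bash
# Sketch only: one digest/dhgroup/keyid pass of the authentication sweep, under the assumptions above.
set -e
RPC=./scripts/rpc.py   # SPDK RPC client; rpc_cmd in the log wraps this

# Register the host key and its optional bidirectional controller key in the SPDK keyring.
# The file paths are placeholders for the DHHC-1 key files created earlier in the test.
$RPC keyring_file_add_key key1  /tmp/spdk.key-XXX
$RPC keyring_file_add_key ckey1 /tmp/spdk.key-YYY

# Restrict the initiator to the digest/dhgroup combination being exercised in this pass.
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Attach to the kernel target with DH-HMAC-CHAP; passing ckey1 makes the authentication bidirectional.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# A successful handshake leaves a controller named nvme0; detach it before the next combination.
$RPC bdev_nvme_get_controllers | jq -r '.[].name'
$RPC bdev_nvme_detach_controller nvme0
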
00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.621 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.881 nvme0n1 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:05.881 10:37:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: ]] 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.881 nvme0n1 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.881 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.140 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: ]] 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.141 nvme0n1 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.141 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.141 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.141 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.141 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.141 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: ]] 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.400 nvme0n1 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:06.400 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.401 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.659 nvme0n1 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.659 10:38:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: ]] 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.659 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.660 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.660 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.660 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:06.660 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:06.660 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:06.660 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.660 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.660 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:06.660 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.660 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:06.660 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:06.660 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:06.660 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:06.660 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.660 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.918 nvme0n1 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: ]] 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:06.918 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:06.919 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:06.919 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.919 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.919 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:06.919 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.919 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:06.919 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:06.919 
10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:06.919 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:06.919 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.919 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.177 nvme0n1 00:36:07.177 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.177 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.177 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.177 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.177 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.178 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.178 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.178 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.178 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.178 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.178 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.178 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.178 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:36:07.178 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.178 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:07.178 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:07.178 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:07.178 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:07.178 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:07.178 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:07.178 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:07.178 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:07.178 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: ]] 00:36:07.178 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:07.178 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:36:07.178 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.178 10:38:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:07.178 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:07.178 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:07.178 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.178 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:07.178 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.178 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.178 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.178 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.178 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:07.178 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:07.178 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:07.178 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.178 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.178 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:07.178 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.178 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:07.178 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:07.178 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:07.178 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:07.178 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.178 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.437 nvme0n1 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: ]] 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.437 10:38:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.437 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.696 nvme0n1 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:07.696 10:38:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.696 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.955 nvme0n1 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: ]] 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.955 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.213 nvme0n1 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:08.214 10:38:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: ]] 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.214 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.472 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.472 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.472 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.472 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.472 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.472 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.472 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.472 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:08.472 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.472 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:08.472 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:08.472 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:08.472 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:08.472 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.472 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.472 nvme0n1 00:36:08.472 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:36:08.472 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.472 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.473 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.473 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.731 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.731 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.731 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.731 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.731 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.731 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.731 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.731 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:36:08.731 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.731 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:08.731 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:08.731 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:08.731 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:08.731 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:08.731 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:08.731 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:08.731 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:08.731 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: ]] 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
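The trace at this point is the connect_authenticate step: bdev_nvme_set_options pins the initiator to a single digest/DH-group pair (here sha256 and ffdhe4096) and bdev_nvme_attach_controller then connects to the target with the matching DH-HMAC-CHAP key pair for that key index. A minimal standalone sketch of that RPC sequence, issued directly through scripts/rpc.py rather than the test's rpc_cmd wrapper, and assuming a running SPDK target reachable at 10.0.0.1:4420 with key objects key2/ckey2 already registered earlier in the test (that setup is outside this excerpt), could look like:

  # Sketch only: the equivalent of the rpc_cmd calls traced above, via scripts/rpc.py.
  # Assumes the SPDK application is running and key2/ckey2 already exist as DH-HMAC-CHAP keys.
  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256 \
      --dhchap-dhgroups ffdhe4096
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # Verify the controller came up authenticated, then tear it down for the next iteration.
  ./scripts/rpc.py bdev_nvme_get_controllers
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The trace below continues the same cycle for the remaining key indexes and DH groups.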
00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.732 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.991 nvme0n1 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: ]] 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.991 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.251 nvme0n1 00:36:09.251 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.251 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.251 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.251 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.251 10:38:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.251 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.510 nvme0n1 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: ]] 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.510 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.078 nvme0n1 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: ]] 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 
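The surrounding entries show the loop structure that drives this whole section: an outer loop over the DH groups and an inner loop over the configured key indexes, where each pass first provisions the secret on the kernel nvmet target (nvmet_auth_set_key) and then authenticates from the host (connect_authenticate). A condensed paraphrase of that driver loop, limited to the DH groups visible in this excerpt and with the helper bodies omitted, would be roughly:

  # Simplified paraphrase of the iteration traced here, not the verbatim host/auth.sh.
  # keys[] / ckeys[] are assumed to hold the DHHC-1 secrets echoed in the trace, indexed 0..4.
  for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
      for keyid in "${!keys[@]}"; do
          # Provision the secret on the target side, then connect and authenticate from the host.
          nvmet_auth_set_key "sha256" "$dhgroup" "$keyid"
          connect_authenticate "sha256" "$dhgroup" "$keyid"
      done
  done

Earlier parts of the log exercise other digests and groups in the same way; the excerpt below picks up with the ffdhe6144 iterations.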
00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.078 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.337 nvme0n1 00:36:10.337 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.337 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.337 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.337 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.337 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.337 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.596 10:38:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: ]] 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.596 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.855 nvme0n1 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: ]] 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:10.855 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.856 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:10.856 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.114 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.114 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:11.114 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:11.114 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:11.114 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.114 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.114 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:11.114 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.114 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:11.114 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:11.114 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:11.114 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:11.114 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.114 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.375 nvme0n1 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.375 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.942 nvme0n1 00:36:11.942 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.942 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:11.942 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:11.942 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.942 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.942 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.942 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:11.942 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:11.942 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.942 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.942 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.942 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:11.942 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:11.942 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:36:11.942 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:11.942 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:11.942 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:11.942 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: ]] 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.943 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:12.510 nvme0n1 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: ]] 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:12.510 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:12.511 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:12.511 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.511 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:12.511 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.511 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:12.511 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:12.511 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:12.511 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:12.511 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:12.511 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:12.511 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:12.511 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:12.511 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:12.511 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:12.511 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:12.511 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:12.511 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.511 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.078 nvme0n1 00:36:13.078 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.078 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.078 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.078 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.078 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.078 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:36:13.079 
10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: ]] 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.079 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.647 nvme0n1 00:36:13.647 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.647 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.647 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.647 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.647 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.647 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: ]] 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.906 
10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.906 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.474 nvme0n1 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:14.474 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:14.475 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:14.475 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.475 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.475 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:14.475 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.475 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:14.475 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:14.475 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:14.475 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:14.475 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.475 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.043 nvme0n1 00:36:15.043 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.043 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.043 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.043 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.043 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.043 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.043 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.043 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.043 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.043 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.043 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.043 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:15.043 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:15.043 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.043 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:36:15.043 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.043 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:15.043 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:15.043 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:15.043 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:15.043 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: ]] 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.044 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.303 nvme0n1 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: ]] 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.303 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.304 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.304 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:36:15.304 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:15.304 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:15.304 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:15.304 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.304 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.304 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:15.304 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.304 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:15.304 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:15.304 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:15.304 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:15.304 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.304 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.563 nvme0n1 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:15.563 10:38:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: ]] 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.563 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.823 nvme0n1 00:36:15.823 10:38:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: ]] 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.823 nvme0n1 00:36:15.823 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.083 nvme0n1 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.083 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.343 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.343 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.343 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.343 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: ]] 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.343 nvme0n1 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.343 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.603 
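
The trace above and below repeats one cycle per (dhgroup, keyid) pair. A condensed sketch of that cycle, reconstructed from the host/auth.sh@101..104 entries visible in the xtrace (helper names are taken from the trace; their bodies are paraphrased, not copied from the script):

    # sha384 sweep as seen in this log: every dhgroup is tried with every key id
    for dhgroup in "${dhgroups[@]}"; do          # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
        for keyid in "${!keys[@]}"; do           # 0..4 in this run
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # program the kernel nvmet target side
            connect_authenticate sha384 "$dhgroup" "$keyid"  # SPDK initiator connects with DH-HMAC-CHAP, verifies, detaches
        done
    done
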
10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: ]] 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:16.603 10:38:10 
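
The nvmf/common.sh@769..783 entries in this stretch are get_main_ns_ip resolving which address the initiator should dial. A rough sketch of what the helper appears to do, inferred from the trace (the indirect-expansion step and the transport lookup are assumptions, since xtrace does not show them directly):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        ip=${ip_candidates[tcp]}   # real script presumably indexes by the transport variable
        ip=${!ip}                  # indirect expansion; resolves to 10.0.0.1 in this run
        [[ -n $ip ]] && echo "$ip"
    }
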
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.603 nvme0n1 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.603 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: ]] 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.863 nvme0n1 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.863 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: ]] 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.123 nvme0n1 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.123 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.123 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.123 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.123 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.123 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:17.381 
10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.381 nvme0n1 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.381 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.381 
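
Note that the key4 attach above carries no --dhchap-ctrlr-key: host/auth.sh@58 builds that argument with ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), so an empty ckeys[keyid] expands to an empty array and the flag is simply omitted. A standalone bash demo of that expansion (illustrative values, not the test's secrets):

    ckeys=([0]="DHHC-1:03:example=:" [4]="")
    keyid=4; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"    # 0 -> nothing is appended to the attach command
    keyid=0; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"     # --dhchap-ctrlr-key ckey0
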
10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: ]] 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.640 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:17.641 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.641 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:17.641 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:17.641 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:17.641 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:17.641 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.641 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.900 nvme0n1 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: ]] 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:17.900 10:38:11 
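
Each connect attempt in this trace boils down to the two RPCs just shown, condensed here for reference (rpc_cmd is the test suite's wrapper around SPDK's rpc.py; key1/ckey1 appear to be names of keys registered with SPDK's keyring earlier in the run, outside this excerpt):

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

Re-issuing bdev_nvme_set_options with a single digest and a single dhgroup pins the initiator to exactly the combination under test, so a passing attach cannot come from quietly negotiating some other parameter.
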
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.900 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.160 nvme0n1 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: ]] 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.160 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.420 nvme0n1 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: ]] 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.420 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.679 nvme0n1 00:36:18.679 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.679 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:18.679 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:18.679 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.679 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.679 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.679 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:18.679 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:18.679 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.679 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.679 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.679 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:18.679 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:36:18.679 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:18.679 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:18.679 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:18.679 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:18.679 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:18.679 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:18.679 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:18.679 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:18.680 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:18.680 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:18.680 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:36:18.680 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:18.680 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:18.680 10:38:12 
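
The echo lines inside nvmet_auth_set_key configure the kernel nvmet target side for the upcoming handshake. xtrace does not print redirections, so the destinations are not visible in this log; the sketch below assumes the usual nvmet configfs attributes for the test host entry (paths and attribute names are assumptions, not taken from this trace):

    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed location
    echo "hmac(sha384)"        > "$host_dir/dhchap_hash"      # digest for DH-HMAC-CHAP
    echo "ffdhe4096"           > "$host_dir/dhchap_dhgroup"   # FFDHE group under test
    echo "DHHC-1:03:<base64>:" > "$host_dir/dhchap_key"       # host secret (value elided here)
    # dhchap_ctrl_key is written the same way, but only when a controller key (ckey) exists
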
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:18.680 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:18.680 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:18.680 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:18.680 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.680 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:18.939 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.939 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:18.939 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:18.939 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:18.939 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:18.939 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.939 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.939 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:18.939 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.939 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:18.939 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:18.939 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:18.939 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:18.939 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.939 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.198 nvme0n1 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:19.198 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: ]] 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.199 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.458 nvme0n1 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: ]] 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.458 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.027 nvme0n1 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.027 10:38:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: ]] 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.027 10:38:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.027 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.286 nvme0n1 00:36:20.286 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.286 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.286 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.286 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.286 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.286 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.544 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.544 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.544 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.544 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.544 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.544 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.544 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:36:20.544 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.544 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:20.544 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:20.544 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:20.544 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: ]] 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:20.545 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.545 
10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.803 nvme0n1 00:36:20.803 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.803 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:20.803 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:20.803 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.803 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.804 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.372 nvme0n1 00:36:21.372 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.372 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.372 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.372 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.372 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.372 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.372 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.372 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.372 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.372 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.372 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.372 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:21.372 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.372 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:36:21.372 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.373 10:38:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: ]] 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.373 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.941 nvme0n1 00:36:21.941 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.941 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.941 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.941 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.941 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.941 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.941 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.941 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.941 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.941 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.941 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.941 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.941 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:36:21.941 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.941 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:21.941 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: ]] 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.942 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.510 nvme0n1 00:36:22.510 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.510 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.510 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.510 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.510 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.510 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.769 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.769 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.769 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:22.769 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.769 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.769 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.769 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:36:22.769 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.769 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:22.769 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:22.769 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:22.769 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:22.769 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:22.769 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:22.769 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:22.769 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: ]] 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.770 
10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.770 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.338 nvme0n1 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: ]] 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.338 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.907 nvme0n1 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.907 10:38:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:23.907 10:38:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.907 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.475 nvme0n1 00:36:24.475 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.475 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.475 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.475 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.475 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.475 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.475 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.475 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.475 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.475 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.475 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.475 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:24.475 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:24.475 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: ]] 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.476 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:24.735 nvme0n1 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: ]] 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.735 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.994 nvme0n1 00:36:24.994 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.994 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.994 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:36:24.995 
10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: ]] 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.995 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.254 nvme0n1 00:36:25.254 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.254 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.254 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.254 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.254 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.254 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.254 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.254 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.254 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.254 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.254 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.254 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.254 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:36:25.254 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.254 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:25.254 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:25.254 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:25.254 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:25.254 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:25.254 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:25.254 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:25.254 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:25.254 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: ]] 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.255 
10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.255 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.514 nvme0n1 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:25.514 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.515 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.515 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:25.515 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.515 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:25.515 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:25.515 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:25.515 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:25.515 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.515 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.515 nvme0n1 00:36:25.515 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.515 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.515 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.515 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.515 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: ]] 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.774 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:25.775 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:25.775 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:25.775 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:25.775 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.775 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.775 nvme0n1 00:36:25.775 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.775 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.775 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.775 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.775 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.775 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.034 
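The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion at host/auth.sh@58 is why key id 4 attaches with --dhchap-key only: its controller secret is empty, so the array expands to zero words and no --dhchap-ctrlr-key argument is passed. A small standalone illustration of the idiom (the array contents here are made up for the demo):

  #!/usr/bin/env bash
  # Build an optional pair of CLI arguments with the ${var:+...} expansion.
  ckeys=("secret-a" "secret-b" "")          # the last id has no controller secret

  for keyid in "${!ckeys[@]}"; do
      # Two words when ckeys[keyid] is non-empty, zero words when it is empty.
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "attach args for id ${keyid}: --dhchap-key key${keyid} ${ckey[*]}"
  done
  # ids 0 and 1 get '--dhchap-ctrlr-key ckeyN'; id 2 gets the host key alone.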
10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: ]] 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:26.034 10:38:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.034 nvme0n1 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.034 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:26.294 10:38:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: ]] 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.294 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.294 nvme0n1 00:36:26.294 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.294 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.294 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.294 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.294 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.294 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: ]] 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.554 10:38:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.554 nvme0n1 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.554 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:26.814 
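Each connect_authenticate pass above reduces to two SPDK RPCs on the initiator: bdev_nvme_set_options to pin the digest and DH group, then bdev_nvme_attach_controller naming the matching keyring keys. A hand-run equivalent of the ffdhe3072/key3 iteration; the rpc.py path is an assumption, and the key3/ckey3 names are assumed to have been registered with the initiator beforehand (not shown in this excerpt):

  #!/usr/bin/env bash
  # One authenticated attach, replayed with SPDK's RPC client.
  # RPC names, flags, address and NQNs are the ones traced in this log.
  rpc=./scripts/rpc.py   # assumption: run from an SPDK checkout, default RPC socket

  # 1. Restrict the initiator to the digest/DH group under test.
  $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

  # 2. Attach with key id 3 (bidirectional: host and controller secrets).
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3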
10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
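The host/auth.sh@101-@104 frames that keep repeating are one unrolled iteration of a two-level sweep: every DH group is exercised against every key id, re-keying the target and re-attaching each time. In outline, with stub functions standing in for the real helpers so the skeleton runs on its own:

  #!/usr/bin/env bash
  # Skeleton of the sha512 sweep seen in this log.  The two stubs stand in for
  # the real nvmet_auth_set_key/connect_authenticate helpers in host/auth.sh.
  nvmet_auth_set_key()   { echo "re-key target:  digest=$1 dhgroup=$2 keyid=$3"; }
  connect_authenticate() { echo "attach + check: digest=$1 dhgroup=$2 keyid=$3"; }

  keys=([0]=key0 [1]=key1 [2]=key2 [3]=key3 [4]=key4)   # five key ids, as in the log
  dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096")        # groups seen in this excerpt

  for dhgroup in "${dhgroups[@]}"; do                   # host/auth.sh@101
      for keyid in "${!keys[@]}"; do                    # host/auth.sh@102
          nvmet_auth_set_key   sha512 "$dhgroup" "$keyid"   # host/auth.sh@103
          connect_authenticate sha512 "$dhgroup" "$keyid"   # host/auth.sh@104
      done
  done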
00:36:26.814 nvme0n1 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.814 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.073 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.073 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.073 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.073 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.073 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.073 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: ]] 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:27.074 10:38:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.074 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.333 nvme0n1 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.333 10:38:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: ]] 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.333 10:38:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:27.333 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:27.334 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.334 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.593 nvme0n1 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: ]] 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.593 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.853 nvme0n1 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: ]] 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.853 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.113 nvme0n1 00:36:28.113 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.113 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.113 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.113 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.113 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.113 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.375 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.634 nvme0n1 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: ]] 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.634 10:38:22 
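The nvmet_auth_set_key calls in this block program the kernel target's DH-HMAC-CHAP parameters for the host NQN before each connect attempt. A minimal standalone sketch of that step is below, assuming the standard Linux nvmet configfs layout (the directory and attribute names are an assumption, they are not printed verbatim in the log); the digest, dhgroup and DHHC-1 secrets are the keyid-0 values shown above:

  # Target side: set per-host DH-HMAC-CHAP parameters via nvmet configfs (assumed layout).
  HOST_DIR=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  mkdir -p "$HOST_DIR"
  echo 'hmac(sha512)' > "$HOST_DIR/dhchap_hash"      # digest under test
  echo 'ffdhe6144'    > "$HOST_DIR/dhchap_dhgroup"   # FFDHE group under test
  echo 'DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ:' > "$HOST_DIR/dhchap_key"
  echo 'DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=:' > "$HOST_DIR/dhchap_ctrlr_key"

Bidirectional authentication is exercised whenever a controller key (ckey) is present; the [[ -z ... ]] checks in the log skip the controller-key write for key IDs that have no ckey.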
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:28.634 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.635 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.635 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:28.635 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.635 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:28.635 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:28.635 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:28.635 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:28.635 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.635 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.894 nvme0n1 00:36:28.894 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.894 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.894 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.894 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.894 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: ]] 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:29.153 10:38:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.153 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.412 nvme0n1 00:36:29.412 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.412 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.412 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.412 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.412 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.412 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.412 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.412 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.412 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.412 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.412 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: ]] 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.413 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.981 nvme0n1 00:36:29.981 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.981 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.981 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.981 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.981 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.981 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.981 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.981 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.981 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.981 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.981 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.981 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.981 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:36:29.981 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.981 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:29.981 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:29.981 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:29.981 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:29.981 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: ]] 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.982 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.241 nvme0n1 00:36:30.241 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.241 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.241 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.241 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.241 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.241 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:30.501 10:38:24 
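On the initiator side, connect_authenticate reduces to two bdev_nvme RPCs followed by a verify-and-detach cycle. Outside the rpc_cmd wrapper the same sequence could plausibly be issued with SPDK's scripts/rpc.py; every flag below is taken from the log, and key3/ckey3 are assumed to be keyring entries registered earlier in the test (that setup is not part of this excerpt):

  # Initiator side: pin the digest/dhgroup under test, attach with DH-HMAC-CHAP keys, verify, detach.
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0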
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.501 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.761 nvme0n1 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDJjODBjYzM0MDFiNTA0ODEwNTljMDMzYjljMGMzMzfIPHwZ: 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: ]] 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Njc2OTQzNmI0ODUyYjhkZjQxMzE5Y2M4ZTRhZWNhZjcwNzk2ZDRjMjgzMWNjYWE0YTc3OGZhNjg2NDYxMjgyZQHH+LY=: 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.761 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.329 nvme0n1 00:36:31.329 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.329 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.329 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.329 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.329 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.329 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: ]] 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.607 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.253 nvme0n1 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.253 10:38:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: ]] 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.253 10:38:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.253 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.842 nvme0n1 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzc5YzczMGZlMDIyZDlkODEyYmVlYjc4ZjU2ZTNjOWNkMGIyM2NjNmY1ZGY3ZDZh152WXw==: 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: ]] 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDk2NDE1MWZlOWJiOWZkODAyYjVjMTRiZGFlZTM1MGTdIRAt: 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.842 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:32.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:32.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:32.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:32.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:32.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:32.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:32.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:32.843 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.843 
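The DHHC-1:xx:...: strings cycled through these loops are NVMe DH-HMAC-CHAP secrets in their textual form: the two-digit field selects the hash used to transform the secret (00 means no transformation, 01/02/03 mean SHA-256/384/512) and the middle field is the base64-encoded secret material. One way to mint such a secret is nvme-cli's gen-dhchap-key; treat the exact option spellings as an assumption, since they vary between nvme-cli versions:

  # Generate a host secret transformed with SHA-256, bound to the host NQN used in this run.
  # (Option names per recent nvme-cli; confirm with 'nvme gen-dhchap-key --help'.)
  nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn=nqn.2024-02.io.spdk:host0
  # Prints a string of the form DHHC-1:01:<base64>: suitable for the configfs and RPC steps above.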
10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.414 nvme0n1 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRlMzY4ZWQwOGUyZThjN2E2YjNjZjc4NzZkZGE3YjBlNmQ2MGUwZTM1MWQ5NTI1MDM2ZjhkODU4MDViZmFlMbOSEjg=: 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:33.414 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.415 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.982 nvme0n1 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: ]] 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.982 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.242 request: 00:36:34.242 { 00:36:34.242 "name": "nvme0", 00:36:34.242 "trtype": "tcp", 00:36:34.242 "traddr": "10.0.0.1", 00:36:34.242 "adrfam": "ipv4", 00:36:34.242 "trsvcid": "4420", 00:36:34.242 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:34.242 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:34.242 "prchk_reftag": false, 00:36:34.242 "prchk_guard": false, 00:36:34.242 "hdgst": false, 00:36:34.242 "ddgst": false, 00:36:34.242 "allow_unrecognized_csi": false, 00:36:34.242 "method": "bdev_nvme_attach_controller", 00:36:34.242 "req_id": 1 00:36:34.242 } 00:36:34.242 Got JSON-RPC error response 00:36:34.242 response: 00:36:34.242 { 00:36:34.242 "code": -5, 00:36:34.242 "message": "Input/output error" 00:36:34.242 } 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.242 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.242 request: 00:36:34.242 { 00:36:34.242 "name": "nvme0", 00:36:34.242 "trtype": "tcp", 00:36:34.242 "traddr": "10.0.0.1", 00:36:34.242 "adrfam": "ipv4", 00:36:34.242 "trsvcid": "4420", 00:36:34.242 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:34.242 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:34.242 "prchk_reftag": false, 00:36:34.242 "prchk_guard": false, 00:36:34.242 "hdgst": false, 00:36:34.242 "ddgst": false, 00:36:34.242 "dhchap_key": "key2", 00:36:34.242 "allow_unrecognized_csi": false, 00:36:34.242 "method": "bdev_nvme_attach_controller", 00:36:34.242 "req_id": 1 00:36:34.242 } 00:36:34.242 Got JSON-RPC error response 00:36:34.242 response: 00:36:34.242 { 00:36:34.242 "code": -5, 00:36:34.242 "message": "Input/output error" 00:36:34.242 } 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
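The two request/response failures above are the suite's expected negative checks: host/auth.sh calls bdev_nvme_attach_controller against the DH-HMAC-CHAP-protected subsystem first with no key and then with a mismatched key (key2 where the target expects key1), and asserts that the RPC returns code -5 (Input/output error) rather than establishing a session, after which bdev_nvme_get_controllers must still report zero controllers. A minimal standalone sketch of the same check, assuming a target configured as in this run and the key names (key2) already registered; scripts/rpc.py on the default RPC socket is the standard SPDK client that the rpc_cmd wrapper seen in the log forwards to, and the error message below is illustrative, not part of the suite:

  # Attach with a DH-CHAP key the target does not expect; authentication must fail.
  if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
       -a 10.0.0.1 -s 4420 \
       -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
       --dhchap-key key2; then
      echo "unexpected: controller attached with mismatched DH-CHAP key" >&2
      exit 1
  fi
  # No controller should remain attached after the failed handshake.
  scripts/rpc.py bdev_nvme_get_controllers | jq length

The remaining checks in this block repeat the pattern with other key/ctrlr-key combinations and with bdev_nvme_set_keys, where a mismatched rotation is expected to fail with code -13 (Permission denied).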
00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.242 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.502 request: 00:36:34.502 { 00:36:34.502 "name": "nvme0", 00:36:34.502 "trtype": "tcp", 00:36:34.502 "traddr": "10.0.0.1", 00:36:34.502 "adrfam": "ipv4", 00:36:34.502 "trsvcid": "4420", 00:36:34.502 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:34.502 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:34.502 "prchk_reftag": false, 00:36:34.502 "prchk_guard": false, 00:36:34.502 "hdgst": false, 00:36:34.502 "ddgst": false, 00:36:34.502 "dhchap_key": "key1", 00:36:34.502 "dhchap_ctrlr_key": "ckey2", 00:36:34.502 "allow_unrecognized_csi": false, 00:36:34.502 "method": "bdev_nvme_attach_controller", 00:36:34.502 "req_id": 1 00:36:34.502 } 00:36:34.502 Got JSON-RPC error response 00:36:34.502 response: 00:36:34.502 { 00:36:34.502 "code": -5, 00:36:34.502 "message": "Input/output 
error" 00:36:34.502 } 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.502 nvme0n1 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:34.502 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:34.503 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:34.503 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:34.503 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:34.503 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:34.503 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:34.503 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: ]] 00:36:34.503 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:34.503 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:34.503 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.503 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.503 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.503 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.503 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:36:34.503 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.503 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.503 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.762 request: 00:36:34.762 { 00:36:34.762 "name": "nvme0", 00:36:34.762 "dhchap_key": "key1", 00:36:34.762 "dhchap_ctrlr_key": "ckey2", 00:36:34.762 "method": "bdev_nvme_set_keys", 00:36:34.762 "req_id": 1 00:36:34.762 } 00:36:34.762 Got JSON-RPC error response 00:36:34.762 response: 00:36:34.762 { 00:36:34.762 "code": -13, 00:36:34.762 "message": "Permission denied" 00:36:34.762 } 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:34.762 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:35.698 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.698 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:35.698 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.698 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.698 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.698 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:35.698 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWNlYmQzYWIxNWQyNGUzZmY2OTU1NGU0N2I2NGY3OTlhOGEyNzU0ZmJmYzhkMjg0bofh4A==: 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: ]] 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OGJhMmE4NzI0N2U1YWU1Yzg5MDJhMmVjNmY4NWNjOGZjYjNmNzY2YWMxN2Y4NTM3zV7Lug==: 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.073 nvme0n1 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTViODJiYWQzNmM3N2I2Y2VkMTFlNzRiNmIzZjc5OWbt6JGf: 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: ]] 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzA2OTU2OTg2NTQ5ZjM1MjQ1MGZiODc2OTY3NGIzZDU4kR/r: 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.073 request: 00:36:37.073 { 00:36:37.073 "name": "nvme0", 00:36:37.073 "dhchap_key": "key2", 00:36:37.073 "dhchap_ctrlr_key": "ckey1", 00:36:37.073 "method": "bdev_nvme_set_keys", 00:36:37.073 "req_id": 1 00:36:37.073 } 00:36:37.073 Got JSON-RPC error response 00:36:37.073 response: 00:36:37.073 { 00:36:37.073 "code": -13, 00:36:37.073 "message": "Permission denied" 00:36:37.073 } 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:37.073 10:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:38.008 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.008 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:38.008 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.008 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.266 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.266 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:36:38.266 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:36:38.266 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:36:38.266 10:38:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:38.266 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:38.266 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:36:38.266 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:38.266 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:36:38.266 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:38.266 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:38.266 rmmod nvme_tcp 00:36:38.266 rmmod nvme_fabrics 00:36:38.266 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:38.266 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:36:38.266 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:36:38.266 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 4128751 ']' 00:36:38.266 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 4128751 00:36:38.266 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 4128751 ']' 00:36:38.266 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 4128751 00:36:38.266 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:36:38.266 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:38.266 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4128751 00:36:38.266 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:38.266 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:38.266 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4128751' 00:36:38.266 killing process with pid 4128751 00:36:38.266 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 4128751 00:36:38.266 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 4128751 00:36:39.203 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:39.203 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:39.203 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:39.203 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:36:39.203 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:36:39.203 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:36:39.203 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:39.203 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:39.203 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:39.203 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:39.203 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:36:39.203 10:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:41.105 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:41.105 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:41.105 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:41.105 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:41.105 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:41.105 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:36:41.105 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:41.362 10:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:41.363 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:41.363 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:41.363 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:41.363 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:41.363 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:43.895 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:43.895 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:43.895 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:43.895 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:43.895 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:43.895 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:43.895 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:43.895 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:43.895 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:43.895 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:43.895 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:43.895 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:43.895 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:44.154 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:44.154 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:44.154 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:44.720 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:44.979 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.U1U /tmp/spdk.key-null.K0e /tmp/spdk.key-sha256.x8h /tmp/spdk.key-sha384.i1W /tmp/spdk.key-sha512.I0b /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:44.979 10:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:47.513 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:36:47.513 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:47.513 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:36:47.513 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:36:47.513 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:36:47.513 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:36:47.513 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:36:47.513 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:36:47.513 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:36:47.513 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:36:47.513 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:36:47.513 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:36:47.513 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:36:47.513 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:36:47.513 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:36:47.513 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:36:47.513 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:36:47.513 00:36:47.513 real 0m54.164s 00:36:47.513 user 0m49.394s 00:36:47.513 sys 0m11.986s 00:36:47.513 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:47.513 10:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.513 ************************************ 00:36:47.513 END TEST nvmf_auth_host 00:36:47.513 ************************************ 00:36:47.513 10:38:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:36:47.513 10:38:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:47.513 10:38:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:47.513 10:38:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:47.513 10:38:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.772 ************************************ 00:36:47.772 START TEST nvmf_digest 00:36:47.772 ************************************ 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:47.772 * Looking for test storage... 
00:36:47.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:47.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:47.772 --rc genhtml_branch_coverage=1 00:36:47.772 --rc genhtml_function_coverage=1 00:36:47.772 --rc genhtml_legend=1 00:36:47.772 --rc geninfo_all_blocks=1 00:36:47.772 --rc geninfo_unexecuted_blocks=1 00:36:47.772 00:36:47.772 ' 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:47.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:47.772 --rc genhtml_branch_coverage=1 00:36:47.772 --rc genhtml_function_coverage=1 00:36:47.772 --rc genhtml_legend=1 00:36:47.772 --rc geninfo_all_blocks=1 00:36:47.772 --rc geninfo_unexecuted_blocks=1 00:36:47.772 00:36:47.772 ' 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:47.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:47.772 --rc genhtml_branch_coverage=1 00:36:47.772 --rc genhtml_function_coverage=1 00:36:47.772 --rc genhtml_legend=1 00:36:47.772 --rc geninfo_all_blocks=1 00:36:47.772 --rc geninfo_unexecuted_blocks=1 00:36:47.772 00:36:47.772 ' 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:47.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:47.772 --rc genhtml_branch_coverage=1 00:36:47.772 --rc genhtml_function_coverage=1 00:36:47.772 --rc genhtml_legend=1 00:36:47.772 --rc geninfo_all_blocks=1 00:36:47.772 --rc geninfo_unexecuted_blocks=1 00:36:47.772 00:36:47.772 ' 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:47.772 
10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:47.772 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:47.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:47.773 10:38:41 
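[editor note] The "[: : integer expression expected" message logged above is test(1) being handed an empty string where -eq needs a number (common.sh line 33 expands an unset variable into '[' '' -eq 1 ']'). A stand-alone reproduction plus a defensive variant, illustrative only and not the project's actual fix:

flag=""
[ "$flag" -eq 1 ] && echo enabled        # prints "[: : integer expression expected", test fails
[ "${flag:-0}" -eq 1 ] && echo enabled   # empty value now compares as 0, no error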
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:36:47.773 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:54.340 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:54.340 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:36:54.340 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:54.340 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:54.340 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:54.340 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:54.340 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:54.341 
10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:54.341 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:54.341 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:54.341 Found net devices under 0000:af:00.0: cvl_0_0 
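[editor note] The device discovery traced above maps each supported PCI function to its kernel net device by globbing the device's net/ directory in sysfs. A stand-alone equivalent, with the PCI addresses taken from this log:

for pci in 0000:af:00.0 0000:af:00.1; do
    for path in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$path" ] || continue               # glob did not match: no net driver bound
        echo "Found net devices under $pci: ${path##*/}"
    done
done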
00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:54.341 Found net devices under 0000:af:00.1: cvl_0_1 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:54.341 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:54.341 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:54.341 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:36:54.341 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:54.341 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:54.341 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:54.341 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:54.341 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:54.341 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:54.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:54.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:36:54.341 00:36:54.341 --- 10.0.0.2 ping statistics --- 00:36:54.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:54.341 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:36:54.341 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:54.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:54.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:36:54.341 00:36:54.341 --- 10.0.0.1 ping statistics --- 00:36:54.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:54.341 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:36:54.341 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:54.341 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:36:54.341 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:54.341 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:54.341 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:54.341 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:54.341 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:54.341 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:54.341 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:54.341 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:54.342 ************************************ 00:36:54.342 START TEST nvmf_digest_clean 00:36:54.342 ************************************ 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=4142386 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 4142386 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4142386 ']' 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:54.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:54.342 10:38:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:54.342 [2024-12-13 10:38:47.311778] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:36:54.342 [2024-12-13 10:38:47.311866] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:54.342 [2024-12-13 10:38:47.431978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:54.342 [2024-12-13 10:38:47.534218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:54.342 [2024-12-13 10:38:47.534263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:54.342 [2024-12-13 10:38:47.534273] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:54.342 [2024-12-13 10:38:47.534299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:54.342 [2024-12-13 10:38:47.534308] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
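[editor note] Condensed from the nvmf_tcp_init and nvmfappstart steps traced above: one port of the E810 pair is moved into its own network namespace so the target (10.0.0.2) and the initiator (10.0.0.1) talk over the physical link rather than the loopback device, and nvmf_tgt is started inside that namespace with RPC configuration deferred. Interface and namespace names are as they appear in the log; paths are relative to the SPDK checkout:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &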
00:36:54.342 [2024-12-13 10:38:47.535769] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:54.342 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:54.342 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:54.342 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:54.342 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:54.342 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:54.342 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:54.342 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:36:54.342 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:36:54.342 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:36:54.342 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.342 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:54.601 null0 00:36:54.601 [2024-12-13 10:38:48.481170] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:54.859 [2024-12-13 10:38:48.505410] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:54.859 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.859 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:36:54.859 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:54.859 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:54.859 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:54.859 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:54.859 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:54.859 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:54.859 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4142559 00:36:54.859 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4142559 /var/tmp/bperf.sock 00:36:54.859 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4142559 ']' 00:36:54.859 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:54.859 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:54.859 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:36:54.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:54.859 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:54.859 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:54.859 10:38:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:54.859 [2024-12-13 10:38:48.581695] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:36:54.859 [2024-12-13 10:38:48.581774] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4142559 ] 00:36:54.859 [2024-12-13 10:38:48.695772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:55.118 [2024-12-13 10:38:48.809342] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:55.686 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:55.686 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:36:55.686 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:55.686 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:55.686 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:56.254 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:56.254 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:56.254 nvme0n1 00:36:56.254 10:38:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:56.254 10:38:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:56.513 Running I/O for 2 seconds... 
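[editor note] Each measurement in this section repeats the driver sequence traced above: start bdevperf against a private RPC socket, finish framework init, attach the remote controller with --ddgst so every NVMe/TCP data PDU carries a crc32c data digest, then kick off the timed run. A condensed sketch with the arguments copied from this first randread run (paths relative to the SPDK checkout):

./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests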
00:36:58.385 21008.00 IOPS, 82.06 MiB/s [2024-12-13T09:38:52.534Z] 21230.00 IOPS, 82.93 MiB/s 00:36:58.643 Latency(us) 00:36:58.643 [2024-12-13T09:38:52.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:58.643 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:58.643 nvme0n1 : 2.05 20824.53 81.35 0.00 0.00 6017.27 2668.25 49432.87 00:36:58.643 [2024-12-13T09:38:52.534Z] =================================================================================================================== 00:36:58.643 [2024-12-13T09:38:52.534Z] Total : 20824.53 81.35 0.00 0.00 6017.27 2668.25 49432.87 00:36:58.643 { 00:36:58.643 "results": [ 00:36:58.643 { 00:36:58.643 "job": "nvme0n1", 00:36:58.643 "core_mask": "0x2", 00:36:58.643 "workload": "randread", 00:36:58.643 "status": "finished", 00:36:58.643 "queue_depth": 128, 00:36:58.643 "io_size": 4096, 00:36:58.643 "runtime": 2.045088, 00:36:58.643 "iops": 20824.53175609069, 00:36:58.643 "mibps": 81.34582717222926, 00:36:58.643 "io_failed": 0, 00:36:58.643 "io_timeout": 0, 00:36:58.643 "avg_latency_us": 6017.270708583235, 00:36:58.643 "min_latency_us": 2668.2514285714287, 00:36:58.643 "max_latency_us": 49432.868571428575 00:36:58.643 } 00:36:58.644 ], 00:36:58.644 "core_count": 1 00:36:58.644 } 00:36:58.644 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:58.644 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:58.644 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:58.644 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:58.644 | select(.opcode=="crc32c") 00:36:58.644 | "\(.module_name) \(.executed)"' 00:36:58.644 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:58.644 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:58.644 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:58.644 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:58.644 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:58.644 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4142559 00:36:58.644 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4142559 ']' 00:36:58.644 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4142559 00:36:58.644 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:36:58.644 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:58.644 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4142559 00:36:58.902 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:58.902 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:36:58.902 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4142559' 00:36:58.902 killing process with pid 4142559 00:36:58.902 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4142559 00:36:58.902 Received shutdown signal, test time was about 2.000000 seconds 00:36:58.902 00:36:58.902 Latency(us) 00:36:58.902 [2024-12-13T09:38:52.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:58.902 [2024-12-13T09:38:52.793Z] =================================================================================================================== 00:36:58.902 [2024-12-13T09:38:52.793Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:58.902 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4142559 00:36:59.839 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:36:59.839 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:59.839 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:59.839 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:36:59.839 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:59.839 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:59.839 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:59.839 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4143449 00:36:59.839 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4143449 /var/tmp/bperf.sock 00:36:59.839 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4143449 ']' 00:36:59.839 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:59.839 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:59.839 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:59.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:59.839 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:59.839 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:59.839 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:59.839 [2024-12-13 10:38:53.496443] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
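[editor note] After each run the test verifies that the digests were actually computed, and by which accel module, by pulling accel statistics from bdevperf and filtering for the crc32c opcode; with no DSA offload configured the expected module is software. The check reduces to roughly the following (jq filter copied from the trace):

stats=$(./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats)
read -r acc_module acc_executed \
    < <(jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' <<< "$stats")
(( acc_executed > 0 )) && [ "$acc_module" = software ] && echo "crc32c was executed in software"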
00:36:59.839 [2024-12-13 10:38:53.496542] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4143449 ] 00:36:59.839 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:59.839 Zero copy mechanism will not be used. 00:36:59.839 [2024-12-13 10:38:53.608191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:59.839 [2024-12-13 10:38:53.716234] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:00.407 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:00.407 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:00.407 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:00.407 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:00.407 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:00.975 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:00.975 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:01.233 nvme0n1 00:37:01.233 10:38:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:01.233 10:38:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:01.491 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:01.491 Zero copy mechanism will not be used. 00:37:01.491 Running I/O for 2 seconds... 
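[editor note] A quick way to read the latency tables in this section: the MiB/s column is simply IOPS multiplied by the I/O size. For the 4 KiB randread totals reported earlier (20824.53 IOPS), for example:

awk 'BEGIN { printf "%.2f MiB/s\n", 20824.53 * 4096 / (1024 * 1024) }'   # -> 81.35 MiB/s, matching the table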
00:37:03.362 5360.00 IOPS, 670.00 MiB/s [2024-12-13T09:38:57.253Z] 5282.00 IOPS, 660.25 MiB/s 00:37:03.362 Latency(us) 00:37:03.362 [2024-12-13T09:38:57.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:03.362 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:03.362 nvme0n1 : 2.00 5283.34 660.42 0.00 0.00 3025.35 1068.86 9362.29 00:37:03.362 [2024-12-13T09:38:57.253Z] =================================================================================================================== 00:37:03.362 [2024-12-13T09:38:57.253Z] Total : 5283.34 660.42 0.00 0.00 3025.35 1068.86 9362.29 00:37:03.362 { 00:37:03.362 "results": [ 00:37:03.362 { 00:37:03.362 "job": "nvme0n1", 00:37:03.362 "core_mask": "0x2", 00:37:03.362 "workload": "randread", 00:37:03.362 "status": "finished", 00:37:03.362 "queue_depth": 16, 00:37:03.362 "io_size": 131072, 00:37:03.362 "runtime": 2.00252, 00:37:03.362 "iops": 5283.342987835327, 00:37:03.362 "mibps": 660.4178734794159, 00:37:03.362 "io_failed": 0, 00:37:03.362 "io_timeout": 0, 00:37:03.362 "avg_latency_us": 3025.346379332073, 00:37:03.362 "min_latency_us": 1068.8609523809523, 00:37:03.362 "max_latency_us": 9362.285714285714 00:37:03.362 } 00:37:03.362 ], 00:37:03.362 "core_count": 1 00:37:03.362 } 00:37:03.362 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:03.362 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:03.362 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:03.362 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:03.362 | select(.opcode=="crc32c") 00:37:03.362 | "\(.module_name) \(.executed)"' 00:37:03.362 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:03.621 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:03.621 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:03.621 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:03.621 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:03.621 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4143449 00:37:03.621 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4143449 ']' 00:37:03.621 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4143449 00:37:03.621 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:03.621 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:03.621 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4143449 00:37:03.621 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:03.621 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:37:03.621 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4143449' 00:37:03.621 killing process with pid 4143449 00:37:03.621 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4143449 00:37:03.621 Received shutdown signal, test time was about 2.000000 seconds 00:37:03.621 00:37:03.621 Latency(us) 00:37:03.621 [2024-12-13T09:38:57.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:03.621 [2024-12-13T09:38:57.512Z] =================================================================================================================== 00:37:03.621 [2024-12-13T09:38:57.512Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:03.621 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4143449 00:37:04.558 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:37:04.558 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:04.558 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:04.558 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:04.558 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:04.558 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:04.558 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:04.558 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4144138 00:37:04.558 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4144138 /var/tmp/bperf.sock 00:37:04.558 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:04.558 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4144138 ']' 00:37:04.558 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:04.558 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:04.558 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:04.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:04.558 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:04.558 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:04.558 [2024-12-13 10:38:58.358650] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
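[editor note] Every bperf run in this section is torn down the same way: confirm the pid is still alive, resolve it to a process name, refuse to signal anything running as sudo, then kill and reap it. An illustrative reconstruction of that pattern (not the exact autotest_common.sh helper):

killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0          # already gone
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 0                  # never signal the privileged wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}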
00:37:04.558 [2024-12-13 10:38:58.358754] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4144138 ] 00:37:04.817 [2024-12-13 10:38:58.471525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:04.817 [2024-12-13 10:38:58.581268] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:05.384 10:38:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:05.384 10:38:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:05.384 10:38:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:05.384 10:38:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:05.384 10:38:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:05.950 10:38:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:05.950 10:38:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:06.209 nvme0n1 00:37:06.209 10:39:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:06.209 10:39:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:06.468 Running I/O for 2 seconds... 
00:37:08.340 24295.00 IOPS, 94.90 MiB/s [2024-12-13T09:39:02.231Z] 24515.00 IOPS, 95.76 MiB/s 00:37:08.340 Latency(us) 00:37:08.340 [2024-12-13T09:39:02.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:08.340 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:08.340 nvme0n1 : 2.00 24529.62 95.82 0.00 0.00 5213.54 2574.63 13419.28 00:37:08.340 [2024-12-13T09:39:02.231Z] =================================================================================================================== 00:37:08.340 [2024-12-13T09:39:02.231Z] Total : 24529.62 95.82 0.00 0.00 5213.54 2574.63 13419.28 00:37:08.340 { 00:37:08.340 "results": [ 00:37:08.340 { 00:37:08.340 "job": "nvme0n1", 00:37:08.340 "core_mask": "0x2", 00:37:08.340 "workload": "randwrite", 00:37:08.340 "status": "finished", 00:37:08.340 "queue_depth": 128, 00:37:08.340 "io_size": 4096, 00:37:08.340 "runtime": 2.004026, 00:37:08.340 "iops": 24529.621871173327, 00:37:08.340 "mibps": 95.81883543427081, 00:37:08.340 "io_failed": 0, 00:37:08.340 "io_timeout": 0, 00:37:08.340 "avg_latency_us": 5213.542152708758, 00:37:08.340 "min_latency_us": 2574.6285714285714, 00:37:08.340 "max_latency_us": 13419.27619047619 00:37:08.340 } 00:37:08.340 ], 00:37:08.340 "core_count": 1 00:37:08.340 } 00:37:08.340 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:08.340 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:08.340 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:08.340 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:08.340 | select(.opcode=="crc32c") 00:37:08.340 | "\(.module_name) \(.executed)"' 00:37:08.340 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:08.598 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:08.598 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:08.599 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:08.599 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:08.599 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4144138 00:37:08.599 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4144138 ']' 00:37:08.599 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4144138 00:37:08.599 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:08.599 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:08.599 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4144138 00:37:08.599 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:08.599 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:37:08.599 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4144138' 00:37:08.599 killing process with pid 4144138 00:37:08.599 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4144138 00:37:08.599 Received shutdown signal, test time was about 2.000000 seconds 00:37:08.599 00:37:08.599 Latency(us) 00:37:08.599 [2024-12-13T09:39:02.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:08.599 [2024-12-13T09:39:02.490Z] =================================================================================================================== 00:37:08.599 [2024-12-13T09:39:02.490Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:08.599 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4144138 00:37:09.535 10:39:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:37:09.535 10:39:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:09.535 10:39:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:09.535 10:39:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:09.535 10:39:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:09.535 10:39:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:09.535 10:39:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:09.535 10:39:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=4145158 00:37:09.535 10:39:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 4145158 /var/tmp/bperf.sock 00:37:09.535 10:39:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:09.535 10:39:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 4145158 ']' 00:37:09.536 10:39:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:09.536 10:39:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:09.536 10:39:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:09.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:09.536 10:39:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:09.536 10:39:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:09.536 [2024-12-13 10:39:03.397645] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:37:09.536 [2024-12-13 10:39:03.397732] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4145158 ] 00:37:09.536 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:09.536 Zero copy mechanism will not be used. 00:37:09.794 [2024-12-13 10:39:03.511176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:09.794 [2024-12-13 10:39:03.624282] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:10.361 10:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:10.361 10:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:10.361 10:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:10.361 10:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:10.362 10:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:10.929 10:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:10.929 10:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:11.187 nvme0n1 00:37:11.187 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:11.187 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:11.446 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:11.446 Zero copy mechanism will not be used. 00:37:11.446 Running I/O for 2 seconds... 
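[editor note] The clean-digest test sweeps the same driver through four workloads; the run_bperf signature traced throughout is (rw, io_size, queue_depth, scan_dsa). Condensed, the sweep performed in this section is equivalent to:

for spec in "randread 4096 128" "randread 131072 16" "randwrite 4096 128" "randwrite 131072 16"; do
    set -- $spec                      # word-split on purpose: rw, io size, queue depth
    run_bperf "$1" "$2" "$3" false    # false = no DSA offload on the initiator side
done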
00:37:13.316 5059.00 IOPS, 632.38 MiB/s [2024-12-13T09:39:07.207Z] 5376.50 IOPS, 672.06 MiB/s 00:37:13.316 Latency(us) 00:37:13.316 [2024-12-13T09:39:07.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:13.316 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:13.317 nvme0n1 : 2.00 5375.49 671.94 0.00 0.00 2971.29 1989.49 5118.05 00:37:13.317 [2024-12-13T09:39:07.208Z] =================================================================================================================== 00:37:13.317 [2024-12-13T09:39:07.208Z] Total : 5375.49 671.94 0.00 0.00 2971.29 1989.49 5118.05 00:37:13.317 { 00:37:13.317 "results": [ 00:37:13.317 { 00:37:13.317 "job": "nvme0n1", 00:37:13.317 "core_mask": "0x2", 00:37:13.317 "workload": "randwrite", 00:37:13.317 "status": "finished", 00:37:13.317 "queue_depth": 16, 00:37:13.317 "io_size": 131072, 00:37:13.317 "runtime": 2.003911, 00:37:13.317 "iops": 5375.488232760837, 00:37:13.317 "mibps": 671.9360290951046, 00:37:13.317 "io_failed": 0, 00:37:13.317 "io_timeout": 0, 00:37:13.317 "avg_latency_us": 2971.2940823652148, 00:37:13.317 "min_latency_us": 1989.4857142857143, 00:37:13.317 "max_latency_us": 5118.049523809524 00:37:13.317 } 00:37:13.317 ], 00:37:13.317 "core_count": 1 00:37:13.317 } 00:37:13.317 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:13.317 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:13.317 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:13.317 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:13.317 | select(.opcode=="crc32c") 00:37:13.317 | "\(.module_name) \(.executed)"' 00:37:13.317 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:13.575 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:13.575 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:13.575 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:13.575 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:13.575 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 4145158 00:37:13.575 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4145158 ']' 00:37:13.575 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4145158 00:37:13.575 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:13.575 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:13.575 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4145158 00:37:13.575 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:13.575 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:37:13.575 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4145158' 00:37:13.575 killing process with pid 4145158 00:37:13.575 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4145158 00:37:13.576 Received shutdown signal, test time was about 2.000000 seconds 00:37:13.576 00:37:13.576 Latency(us) 00:37:13.576 [2024-12-13T09:39:07.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:13.576 [2024-12-13T09:39:07.467Z] =================================================================================================================== 00:37:13.576 [2024-12-13T09:39:07.467Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:13.576 10:39:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4145158 00:37:14.512 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 4142386 00:37:14.512 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 4142386 ']' 00:37:14.512 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 4142386 00:37:14.512 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:14.512 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:14.512 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4142386 00:37:14.512 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:14.512 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:14.512 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4142386' 00:37:14.512 killing process with pid 4142386 00:37:14.512 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 4142386 00:37:14.512 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 4142386 00:37:15.888 00:37:15.888 real 0m22.277s 00:37:15.888 user 0m41.936s 00:37:15.888 sys 0m4.683s 00:37:15.888 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:15.888 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:15.888 ************************************ 00:37:15.888 END TEST nvmf_digest_clean 00:37:15.888 ************************************ 00:37:15.888 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:37:15.888 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:15.888 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:15.888 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:15.888 ************************************ 00:37:15.888 START TEST nvmf_digest_error 00:37:15.888 ************************************ 00:37:15.888 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:37:15.888 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:37:15.888 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:15.888 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:15.888 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:15.888 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:15.888 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=4146348 00:37:15.888 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 4146348 00:37:15.888 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4146348 ']' 00:37:15.888 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:15.888 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:15.888 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:15.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:15.888 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:15.888 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:15.888 [2024-12-13 10:39:09.650204] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:15.888 [2024-12-13 10:39:09.650293] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:15.888 [2024-12-13 10:39:09.762780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.147 [2024-12-13 10:39:09.869625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:16.147 [2024-12-13 10:39:09.869669] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:16.147 [2024-12-13 10:39:09.869679] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:16.147 [2024-12-13 10:39:09.869691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:16.147 [2024-12-13 10:39:09.869698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
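The lines from nvmfappstart down to the app_setup_trace notices above are the target-side bring-up for the digest-error test: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc so that the crc32c opcode can be rerouted to the accel "error" module before the framework initializes, and the harness then waits for its default RPC socket. A sketch of the equivalent manual sequence, using the command line from the trace; the framework_start_init and transport calls are the usual follow-up for a --wait-for-rpc target and are assumptions here, not shown verbatim in this slice of the log:

# Sketch of the target bring-up implied by the trace; namespace, core mask and paths are the values from this run.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# --wait-for-rpc holds framework init so accel opcodes can be remapped first
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

# route crc32c to the "error" accel module (the accel_rpc.c notice below confirms it),
# then finish initialization (assumed follow-up after --wait-for-rpc)
$SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
$SPDK/scripts/rpc.py framework_start_init

# the "TCP Transport Init" and 10.0.0.2:4420 listener notices that follow come from the
# usual transport/subsystem RPCs; only the transport creation is sketched here
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp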
00:37:16.147 [2024-12-13 10:39:09.871125] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:16.713 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:16.713 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:16.713 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:16.713 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:16.713 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:16.713 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:16.713 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:37:16.713 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.713 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:16.713 [2024-12-13 10:39:10.493244] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:37:16.713 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.713 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:37:16.713 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:37:16.713 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.713 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:16.972 null0 00:37:16.972 [2024-12-13 10:39:10.827582] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:16.972 [2024-12-13 10:39:10.851812] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:16.972 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.972 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:37:16.972 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:16.972 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:16.972 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:16.972 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:16.972 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4146720 00:37:16.972 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:37:16.972 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4146720 /var/tmp/bperf.sock 00:37:16.972 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4146720 ']' 
00:37:16.972 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:16.972 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:16.972 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:16.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:16.972 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:16.972 10:39:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:17.231 [2024-12-13 10:39:10.928549] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:17.231 [2024-12-13 10:39:10.928630] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4146720 ] 00:37:17.231 [2024-12-13 10:39:11.039426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.490 [2024-12-13 10:39:11.146287] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:18.057 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:18.057 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:18.057 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:18.057 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:18.058 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:18.058 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.058 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:18.058 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.058 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:18.058 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:18.625 nvme0n1 00:37:18.625 10:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:18.625 10:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.625 10:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
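At this point the error-test client is fully wired up: bdevperf (pid 4146720) was started against /var/tmp/bperf.sock, bdev_nvme_set_options was issued with --nvme-error-stat and --bdev-retry-count -1, the controller was attached with --ddgst, and accel_error_inject_error on the target side (rpc_cmd uses the default spdk.sock, while bperf_rpc addresses bperf.sock) first cleared any stale injection (-t disable) and then enabled corruption (-t corrupt -i 256). With crc32c routed to the error module on the target and corruption enabled, the initiator's data digest verification fails and the affected reads complete with COMMAND TRANSIENT TRANSPORT ERROR, which is what the remainder of this pass shows. A sketch of just the injection control, with the flag values taken verbatim from the trace:

# Sketch of the injection toggling seen in the trace; the -i 256 argument is copied
# verbatim from the test, its exact semantics belong to the accel error module.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# clear any previous injection state on the target's crc32c path
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

# start corrupting crc32c results; each corrupted data digest is reported by the
# initiator as a "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR completion
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256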
00:37:18.625 10:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.625 10:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:18.625 10:39:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:18.625 Running I/O for 2 seconds... 00:37:18.625 [2024-12-13 10:39:12.439487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.625 [2024-12-13 10:39:12.439535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.625 [2024-12-13 10:39:12.439552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.625 [2024-12-13 10:39:12.453469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.625 [2024-12-13 10:39:12.453510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.625 [2024-12-13 10:39:12.453524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.625 [2024-12-13 10:39:12.467497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.625 [2024-12-13 10:39:12.467527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.625 [2024-12-13 10:39:12.467540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.625 [2024-12-13 10:39:12.476545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.625 [2024-12-13 10:39:12.476572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.625 [2024-12-13 10:39:12.476584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.625 [2024-12-13 10:39:12.491134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.625 [2024-12-13 10:39:12.491166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.625 [2024-12-13 10:39:12.491178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.625 [2024-12-13 10:39:12.503311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.625 [2024-12-13 10:39:12.503338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.625 [2024-12-13 10:39:12.503350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.625 [2024-12-13 10:39:12.514347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.625 [2024-12-13 10:39:12.514375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.625 [2024-12-13 10:39:12.514387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.525702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.525730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 10:39:12.525743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.535962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.535990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 10:39:12.536002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.550159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.550185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 10:39:12.550197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.564583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.564610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 10:39:12.564650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.578725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.578752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 10:39:12.578764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.589035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.589062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 
10:39:12.589077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.603457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.603485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 10:39:12.603498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.614816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.614843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 10:39:12.614855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.624849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.624876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 10:39:12.624887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.634710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.634736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 10:39:12.634748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.646071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.646098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 10:39:12.646110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.657584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.657610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 10:39:12.657622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.666753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.666778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:24164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 10:39:12.666790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.679170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.679196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 10:39:12.679208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.690473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.690503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 10:39:12.690515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.701592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.701620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 10:39:12.701632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.713900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.713927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 10:39:12.713939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.723350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.723376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 10:39:12.723388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.736733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.736758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 10:39:12.736769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.746196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.746223] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 10:39:12.746235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.760110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.760135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 10:39:12.760146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.774163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.774191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 10:39:12.774202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:18.935 [2024-12-13 10:39:12.787489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:18.935 [2024-12-13 10:39:12.787518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:18.935 [2024-12-13 10:39:12.787534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.240 [2024-12-13 10:39:12.803264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.240 [2024-12-13 10:39:12.803291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.240 [2024-12-13 10:39:12.803303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.240 [2024-12-13 10:39:12.813164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.240 [2024-12-13 10:39:12.813193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.240 [2024-12-13 10:39:12.813205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.240 [2024-12-13 10:39:12.825249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.240 [2024-12-13 10:39:12.825277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.240 [2024-12-13 10:39:12.825289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.240 [2024-12-13 10:39:12.834716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x615000325f80) 00:37:19.240 [2024-12-13 10:39:12.834743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.240 [2024-12-13 10:39:12.834756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.240 [2024-12-13 10:39:12.845636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.240 [2024-12-13 10:39:12.845663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.240 [2024-12-13 10:39:12.845675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.240 [2024-12-13 10:39:12.856501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.240 [2024-12-13 10:39:12.856528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.240 [2024-12-13 10:39:12.856540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.240 [2024-12-13 10:39:12.870758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.240 [2024-12-13 10:39:12.870785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.240 [2024-12-13 10:39:12.870796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.240 [2024-12-13 10:39:12.883285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.240 [2024-12-13 10:39:12.883318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.240 [2024-12-13 10:39:12.883330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.240 [2024-12-13 10:39:12.897033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.240 [2024-12-13 10:39:12.897066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.240 [2024-12-13 10:39:12.897078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.240 [2024-12-13 10:39:12.906872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.240 [2024-12-13 10:39:12.906898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.240 [2024-12-13 10:39:12.906910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.240 [2024-12-13 
10:39:12.920238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.240 [2024-12-13 10:39:12.920265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.240 [2024-12-13 10:39:12.920277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.240 [2024-12-13 10:39:12.934119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.240 [2024-12-13 10:39:12.934146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.240 [2024-12-13 10:39:12.934157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.240 [2024-12-13 10:39:12.944265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.240 [2024-12-13 10:39:12.944292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.240 [2024-12-13 10:39:12.944304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.240 [2024-12-13 10:39:12.957137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.240 [2024-12-13 10:39:12.957164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.240 [2024-12-13 10:39:12.957177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.240 [2024-12-13 10:39:12.967073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.241 [2024-12-13 10:39:12.967101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.241 [2024-12-13 10:39:12.967113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.241 [2024-12-13 10:39:12.982079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.241 [2024-12-13 10:39:12.982107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.241 [2024-12-13 10:39:12.982118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.241 [2024-12-13 10:39:12.994839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.241 [2024-12-13 10:39:12.994867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.241 [2024-12-13 10:39:12.994879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.241 [2024-12-13 10:39:13.006186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.241 [2024-12-13 10:39:13.006213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.241 [2024-12-13 10:39:13.006225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.241 [2024-12-13 10:39:13.018131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.241 [2024-12-13 10:39:13.018158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.241 [2024-12-13 10:39:13.018170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.241 [2024-12-13 10:39:13.032516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.241 [2024-12-13 10:39:13.032544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.241 [2024-12-13 10:39:13.032557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.241 [2024-12-13 10:39:13.045186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.241 [2024-12-13 10:39:13.045213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.241 [2024-12-13 10:39:13.045225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.241 [2024-12-13 10:39:13.054983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.241 [2024-12-13 10:39:13.055010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.241 [2024-12-13 10:39:13.055022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.241 [2024-12-13 10:39:13.065685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.241 [2024-12-13 10:39:13.065712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.241 [2024-12-13 10:39:13.065724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.241 [2024-12-13 10:39:13.076841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.241 [2024-12-13 10:39:13.076876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.241 [2024-12-13 
10:39:13.076888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.241 [2024-12-13 10:39:13.086760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.241 [2024-12-13 10:39:13.086787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.241 [2024-12-13 10:39:13.086800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.241 [2024-12-13 10:39:13.099024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.241 [2024-12-13 10:39:13.099057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.241 [2024-12-13 10:39:13.099069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.500 [2024-12-13 10:39:13.112139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.500 [2024-12-13 10:39:13.112167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.500 [2024-12-13 10:39:13.112180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.500 [2024-12-13 10:39:13.122379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.500 [2024-12-13 10:39:13.122406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.500 [2024-12-13 10:39:13.122418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.500 [2024-12-13 10:39:13.136718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.500 [2024-12-13 10:39:13.136746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.136758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.501 [2024-12-13 10:39:13.149934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.501 [2024-12-13 10:39:13.149961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.149972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.501 [2024-12-13 10:39:13.160121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.501 [2024-12-13 10:39:13.160149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:24 nsid:1 lba:21663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.160161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.501 [2024-12-13 10:39:13.172571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.501 [2024-12-13 10:39:13.172598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.172610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.501 [2024-12-13 10:39:13.183673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.501 [2024-12-13 10:39:13.183698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.183710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.501 [2024-12-13 10:39:13.193086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.501 [2024-12-13 10:39:13.193113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.193124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.501 [2024-12-13 10:39:13.205473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.501 [2024-12-13 10:39:13.205501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.205513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.501 [2024-12-13 10:39:13.216780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.501 [2024-12-13 10:39:13.216807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.216818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.501 [2024-12-13 10:39:13.229314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.501 [2024-12-13 10:39:13.229342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.229354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.501 [2024-12-13 10:39:13.239447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.501 [2024-12-13 
10:39:13.239480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.239492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.501 [2024-12-13 10:39:13.250736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.501 [2024-12-13 10:39:13.250762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.250775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.501 [2024-12-13 10:39:13.262651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.501 [2024-12-13 10:39:13.262677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.262689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.501 [2024-12-13 10:39:13.272343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.501 [2024-12-13 10:39:13.272370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.272383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.501 [2024-12-13 10:39:13.283597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.501 [2024-12-13 10:39:13.283624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.283636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.501 [2024-12-13 10:39:13.296463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.501 [2024-12-13 10:39:13.296494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.296506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.501 [2024-12-13 10:39:13.306381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.501 [2024-12-13 10:39:13.306409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.306421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.501 [2024-12-13 10:39:13.319171] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.501 [2024-12-13 10:39:13.319200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.319213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.501 [2024-12-13 10:39:13.328991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.501 [2024-12-13 10:39:13.329017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.329028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.501 [2024-12-13 10:39:13.343076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.501 [2024-12-13 10:39:13.343104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.343116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.501 [2024-12-13 10:39:13.352207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.501 [2024-12-13 10:39:13.352234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.352245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.501 [2024-12-13 10:39:13.365090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.501 [2024-12-13 10:39:13.365117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.365129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.501 [2024-12-13 10:39:13.377711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.501 [2024-12-13 10:39:13.377738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.377749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.501 [2024-12-13 10:39:13.392083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.501 [2024-12-13 10:39:13.392111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.501 [2024-12-13 10:39:13.392123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.760 [2024-12-13 10:39:13.401740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.760 [2024-12-13 10:39:13.401767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.760 [2024-12-13 10:39:13.401778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.761 [2024-12-13 10:39:13.415488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.761 [2024-12-13 10:39:13.415517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.761 [2024-12-13 10:39:13.415528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.761 21286.00 IOPS, 83.15 MiB/s [2024-12-13T09:39:13.652Z] [2024-12-13 10:39:13.431215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.761 [2024-12-13 10:39:13.431242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.761 [2024-12-13 10:39:13.431254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.761 [2024-12-13 10:39:13.444686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.761 [2024-12-13 10:39:13.444713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.761 [2024-12-13 10:39:13.444725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.761 [2024-12-13 10:39:13.458880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.761 [2024-12-13 10:39:13.458906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.761 [2024-12-13 10:39:13.458918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.761 [2024-12-13 10:39:13.469372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.761 [2024-12-13 10:39:13.469398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.761 [2024-12-13 10:39:13.469410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.761 [2024-12-13 10:39:13.483319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.761 [2024-12-13 10:39:13.483345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:37:19.761 [2024-12-13 10:39:13.483356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.761 [2024-12-13 10:39:13.496317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.761 [2024-12-13 10:39:13.496343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.761 [2024-12-13 10:39:13.496354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.761 [2024-12-13 10:39:13.505032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.761 [2024-12-13 10:39:13.505062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.761 [2024-12-13 10:39:13.505074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.761 [2024-12-13 10:39:13.516297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.761 [2024-12-13 10:39:13.516324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.761 [2024-12-13 10:39:13.516336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.761 [2024-12-13 10:39:13.529120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.761 [2024-12-13 10:39:13.529146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.761 [2024-12-13 10:39:13.529158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.761 [2024-12-13 10:39:13.539084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.761 [2024-12-13 10:39:13.539110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.761 [2024-12-13 10:39:13.539122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.761 [2024-12-13 10:39:13.551601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.761 [2024-12-13 10:39:13.551628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.761 [2024-12-13 10:39:13.551640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.761 [2024-12-13 10:39:13.564111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.761 [2024-12-13 10:39:13.564136] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.761 [2024-12-13 10:39:13.564148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.761 [2024-12-13 10:39:13.573923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.761 [2024-12-13 10:39:13.573949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.761 [2024-12-13 10:39:13.573962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.761 [2024-12-13 10:39:13.587675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.761 [2024-12-13 10:39:13.587702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.761 [2024-12-13 10:39:13.587714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.761 [2024-12-13 10:39:13.600288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.761 [2024-12-13 10:39:13.600315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.761 [2024-12-13 10:39:13.600326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.761 [2024-12-13 10:39:13.609425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.761 [2024-12-13 10:39:13.609458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.761 [2024-12-13 10:39:13.609471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.761 [2024-12-13 10:39:13.622221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.761 [2024-12-13 10:39:13.622247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.761 [2024-12-13 10:39:13.622258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.761 [2024-12-13 10:39:13.634901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:19.761 [2024-12-13 10:39:13.634927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.761 [2024-12-13 10:39:13.634939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:19.761 [2024-12-13 10:39:13.646197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 
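Each injected CRC failure shows up in this console output as a three-line pattern: nvme_tcp reports a data digest error on the receive path, the offending READ command is printed, and its completion carries status (00/22), i.e. status code type 0x0 with status code 0x22 (Transient Transport Error) and dnr:0, so the host is allowed to retry the command. A rough way to tally these events straight from a saved copy of this output (illustrative only; the file name is arbitrary, and the test itself reads the counter over RPC later in the run):

  # Count completions flagged as transient transport errors in the captured log.
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' nvmf-tcp-phy-autotest.console.log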
00:37:19.761 [2024-12-13 10:39:13.646223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:19.761 [2024-12-13 10:39:13.646235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.020 [2024-12-13 10:39:13.655962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.020 [2024-12-13 10:39:13.655989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.020 [2024-12-13 10:39:13.656001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.020 [2024-12-13 10:39:13.668897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.020 [2024-12-13 10:39:13.668924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.020 [2024-12-13 10:39:13.668935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.020 [2024-12-13 10:39:13.681626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.020 [2024-12-13 10:39:13.681653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.021 [2024-12-13 10:39:13.681664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.021 [2024-12-13 10:39:13.690749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.021 [2024-12-13 10:39:13.690775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.021 [2024-12-13 10:39:13.690786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.021 [2024-12-13 10:39:13.703635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.021 [2024-12-13 10:39:13.703665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.021 [2024-12-13 10:39:13.703680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.021 [2024-12-13 10:39:13.716624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.021 [2024-12-13 10:39:13.716650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.021 [2024-12-13 10:39:13.716662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.021 [2024-12-13 10:39:13.727854] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.021 [2024-12-13 10:39:13.727881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.021 [2024-12-13 10:39:13.727893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.021 [2024-12-13 10:39:13.738982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.021 [2024-12-13 10:39:13.739009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.021 [2024-12-13 10:39:13.739021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.021 [2024-12-13 10:39:13.752435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.021 [2024-12-13 10:39:13.752467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.021 [2024-12-13 10:39:13.752479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.021 [2024-12-13 10:39:13.763454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.021 [2024-12-13 10:39:13.763480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.021 [2024-12-13 10:39:13.763492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.021 [2024-12-13 10:39:13.773172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.021 [2024-12-13 10:39:13.773199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.021 [2024-12-13 10:39:13.773210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.021 [2024-12-13 10:39:13.786652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.021 [2024-12-13 10:39:13.786678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.021 [2024-12-13 10:39:13.786690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.021 [2024-12-13 10:39:13.799261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.021 [2024-12-13 10:39:13.799286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.021 [2024-12-13 10:39:13.799298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.021 [2024-12-13 10:39:13.808456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.021 [2024-12-13 10:39:13.808482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.021 [2024-12-13 10:39:13.808494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.021 [2024-12-13 10:39:13.822236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.021 [2024-12-13 10:39:13.822264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.021 [2024-12-13 10:39:13.822275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.021 [2024-12-13 10:39:13.835618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.021 [2024-12-13 10:39:13.835645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.021 [2024-12-13 10:39:13.835657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.021 [2024-12-13 10:39:13.845768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.021 [2024-12-13 10:39:13.845794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.021 [2024-12-13 10:39:13.845806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.021 [2024-12-13 10:39:13.857214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.021 [2024-12-13 10:39:13.857241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.021 [2024-12-13 10:39:13.857252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.021 [2024-12-13 10:39:13.870438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.021 [2024-12-13 10:39:13.870471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.021 [2024-12-13 10:39:13.870483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.021 [2024-12-13 10:39:13.880314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.021 [2024-12-13 10:39:13.880342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.021 [2024-12-13 10:39:13.880354] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.021 [2024-12-13 10:39:13.893177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.021 [2024-12-13 10:39:13.893205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.021 [2024-12-13 10:39:13.893217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.021 [2024-12-13 10:39:13.905095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.021 [2024-12-13 10:39:13.905126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.021 [2024-12-13 10:39:13.905138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.280 [2024-12-13 10:39:13.914653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.281 [2024-12-13 10:39:13.914680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.281 [2024-12-13 10:39:13.914691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.281 [2024-12-13 10:39:13.925949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.281 [2024-12-13 10:39:13.925975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.281 [2024-12-13 10:39:13.925986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.281 [2024-12-13 10:39:13.937404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.281 [2024-12-13 10:39:13.937430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.281 [2024-12-13 10:39:13.937441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.281 [2024-12-13 10:39:13.947705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.281 [2024-12-13 10:39:13.947732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.281 [2024-12-13 10:39:13.947743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.281 [2024-12-13 10:39:13.961772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.281 [2024-12-13 10:39:13.961799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8946 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.281 [2024-12-13 10:39:13.961811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.281 [2024-12-13 10:39:13.973130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.281 [2024-12-13 10:39:13.973157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.281 [2024-12-13 10:39:13.973169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.281 [2024-12-13 10:39:13.983286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.281 [2024-12-13 10:39:13.983313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.281 [2024-12-13 10:39:13.983324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.281 [2024-12-13 10:39:13.998171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.281 [2024-12-13 10:39:13.998197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.281 [2024-12-13 10:39:13.998209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.281 [2024-12-13 10:39:14.011518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.281 [2024-12-13 10:39:14.011545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.281 [2024-12-13 10:39:14.011556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.281 [2024-12-13 10:39:14.021871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.281 [2024-12-13 10:39:14.021896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.281 [2024-12-13 10:39:14.021908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.281 [2024-12-13 10:39:14.034568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.281 [2024-12-13 10:39:14.034594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.281 [2024-12-13 10:39:14.034606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.281 [2024-12-13 10:39:14.049189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.281 [2024-12-13 10:39:14.049216] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.281 [2024-12-13 10:39:14.049227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.281 [2024-12-13 10:39:14.062838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.281 [2024-12-13 10:39:14.062864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.281 [2024-12-13 10:39:14.062876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.281 [2024-12-13 10:39:14.072316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.281 [2024-12-13 10:39:14.072342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.281 [2024-12-13 10:39:14.072354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.281 [2024-12-13 10:39:14.085541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.281 [2024-12-13 10:39:14.085568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.281 [2024-12-13 10:39:14.085588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.281 [2024-12-13 10:39:14.094973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.281 [2024-12-13 10:39:14.094999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.281 [2024-12-13 10:39:14.095012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.281 [2024-12-13 10:39:14.107709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.281 [2024-12-13 10:39:14.107741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.281 [2024-12-13 10:39:14.107753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.281 [2024-12-13 10:39:14.120225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.281 [2024-12-13 10:39:14.120252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.281 [2024-12-13 10:39:14.120264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.281 [2024-12-13 10:39:14.132819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x615000325f80) 00:37:20.281 [2024-12-13 10:39:14.132845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.281 [2024-12-13 10:39:14.132857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.281 [2024-12-13 10:39:14.142358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.281 [2024-12-13 10:39:14.142383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.281 [2024-12-13 10:39:14.142395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.281 [2024-12-13 10:39:14.156912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.281 [2024-12-13 10:39:14.156939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.281 [2024-12-13 10:39:14.156950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.281 [2024-12-13 10:39:14.169200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.281 [2024-12-13 10:39:14.169227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.281 [2024-12-13 10:39:14.169239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 [2024-12-13 10:39:14.179529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.179556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 10:39:14.179567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 [2024-12-13 10:39:14.191432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.191464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 10:39:14.191475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 [2024-12-13 10:39:14.205523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.205549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 10:39:14.205562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 [2024-12-13 
10:39:14.218197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.218223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 10:39:14.218234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 [2024-12-13 10:39:14.227515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.227542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 10:39:14.227554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 [2024-12-13 10:39:14.239765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.239793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 10:39:14.239805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 [2024-12-13 10:39:14.249477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.249504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 10:39:14.249515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 [2024-12-13 10:39:14.262931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.262959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 10:39:14.262970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 [2024-12-13 10:39:14.274882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.274910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 10:39:14.274922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 [2024-12-13 10:39:14.286097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.286125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 10:39:14.286136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 [2024-12-13 10:39:14.296641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.296667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 10:39:14.296679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 [2024-12-13 10:39:14.308073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.308105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 10:39:14.308117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 [2024-12-13 10:39:14.317341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.317368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 10:39:14.317380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 [2024-12-13 10:39:14.328390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.328417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 10:39:14.328429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 [2024-12-13 10:39:14.339929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.339957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 10:39:14.339969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 [2024-12-13 10:39:14.352063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.352091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 10:39:14.352102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 [2024-12-13 10:39:14.361638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.361665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 
10:39:14.361677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 [2024-12-13 10:39:14.372632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.372660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 10:39:14.372672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 [2024-12-13 10:39:14.383713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.383740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 10:39:14.383751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 [2024-12-13 10:39:14.393495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.393522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 10:39:14.393534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 [2024-12-13 10:39:14.404366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.404392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 10:39:14.404404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 [2024-12-13 10:39:14.415178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.415205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 10:39:14.415217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 21541.50 IOPS, 84.15 MiB/s [2024-12-13T09:39:14.432Z] [2024-12-13 10:39:14.426834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:20.541 [2024-12-13 10:39:14.426858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:20.541 [2024-12-13 10:39:14.426870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:20.541 00:37:20.541 Latency(us) 00:37:20.542 [2024-12-13T09:39:14.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:20.542 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 
128, IO size: 4096) 00:37:20.542 nvme0n1 : 2.00 21537.09 84.13 0.00 0.00 5935.83 3011.54 18225.25 00:37:20.542 [2024-12-13T09:39:14.433Z] =================================================================================================================== 00:37:20.542 [2024-12-13T09:39:14.433Z] Total : 21537.09 84.13 0.00 0.00 5935.83 3011.54 18225.25 00:37:20.801 { 00:37:20.801 "results": [ 00:37:20.801 { 00:37:20.801 "job": "nvme0n1", 00:37:20.801 "core_mask": "0x2", 00:37:20.801 "workload": "randread", 00:37:20.801 "status": "finished", 00:37:20.801 "queue_depth": 128, 00:37:20.801 "io_size": 4096, 00:37:20.801 "runtime": 2.004867, 00:37:20.801 "iops": 21537.089492719468, 00:37:20.801 "mibps": 84.12925583093542, 00:37:20.801 "io_failed": 0, 00:37:20.801 "io_timeout": 0, 00:37:20.801 "avg_latency_us": 5935.833295329851, 00:37:20.801 "min_latency_us": 3011.535238095238, 00:37:20.801 "max_latency_us": 18225.249523809525 00:37:20.801 } 00:37:20.801 ], 00:37:20.801 "core_count": 1 00:37:20.801 } 00:37:20.801 10:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:20.801 10:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:20.801 10:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:20.801 | .driver_specific 00:37:20.801 | .nvme_error 00:37:20.801 | .status_code 00:37:20.801 | .command_transient_transport_error' 00:37:20.801 10:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:20.801 10:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 169 > 0 )) 00:37:20.801 10:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4146720 00:37:20.801 10:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4146720 ']' 00:37:20.801 10:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4146720 00:37:20.801 10:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:20.801 10:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:20.801 10:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4146720 00:37:21.060 10:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:21.060 10:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:21.060 10:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4146720' 00:37:21.060 killing process with pid 4146720 00:37:21.060 10:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4146720 00:37:21.060 Received shutdown signal, test time was about 2.000000 seconds 00:37:21.060 00:37:21.060 Latency(us) 00:37:21.060 [2024-12-13T09:39:14.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:21.060 [2024-12-13T09:39:14.951Z] 
=================================================================================================================== 00:37:21.060 [2024-12-13T09:39:14.951Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:21.060 10:39:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4146720 00:37:22.001 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:37:22.001 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:22.001 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:22.001 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:37:22.001 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:37:22.001 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4147416 00:37:22.001 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4147416 /var/tmp/bperf.sock 00:37:22.001 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:37:22.001 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4147416 ']' 00:37:22.001 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:22.001 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:22.001 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:22.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:22.001 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:22.001 10:39:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:22.001 [2024-12-13 10:39:15.641125] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:22.001 [2024-12-13 10:39:15.641212] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4147416 ] 00:37:22.001 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:22.001 Zero copy mechanism will not be used. 
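This second pass (run_bperf_err randread 131072 16) repeats the pattern with 128 KiB reads at queue depth 16: bdevperf is started against its own RPC socket with -z, so it sits idle until a perform_tests RPC arrives, and the script waits for that socket before configuring anything. A condensed sketch of the launch as it appears in the trace above (the backgrounding and variable name are illustrative):

  # Start bdevperf on core mask 0x2: 128 KiB random reads, qd 16, 2 s runtime.
  # -z keeps it idle until the perform_tests RPC is issued later.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!

  # waitforlisten is the autotest helper that polls until the UNIX-domain
  # RPC socket is accepting connections before any RPCs are sent.
  waitforlisten "$bperfpid" /var/tmp/bperf.sock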
00:37:22.001 [2024-12-13 10:39:15.754782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:22.001 [2024-12-13 10:39:15.859549] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:22.568 10:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:22.568 10:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:22.568 10:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:22.568 10:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:22.827 10:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:22.827 10:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.827 10:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:22.827 10:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.827 10:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:22.827 10:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:23.086 nvme0n1 00:37:23.086 10:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:23.086 10:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.086 10:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:23.086 10:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.086 10:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:23.086 10:39:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:23.345 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:23.345 Zero copy mechanism will not be used. 00:37:23.345 Running I/O for 2 seconds... 
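With bdevperf idle on its socket, the script configures the host side before kicking off I/O: per-controller NVMe error counters are enabled and retries made unlimited, any stale accel error injection is cleared, the controller is attached with TCP data digest enabled (--ddgst), and crc32c corruption is injected so that data digest verification fails during reads, which is what produces the (00/22) completions counted afterwards. A condensed sketch of that sequence, matching the rpc.py and rpc_cmd calls traced above (the $rpc shorthand is illustrative; rpc_cmd is the autotest helper used in the trace for the accel error injection):

  rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'

  # Track NVMe error status codes per controller and retry failed I/O indefinitely.
  $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Make sure no error injection is left over from the previous pass.
  rpc_cmd accel_error_inject_error -o crc32c -t disable

  # Attach the target with TCP data digest enabled, so corrupted CRC32C values
  # are detected as data digest errors on the receive path.
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Inject crc32c corruption (flags exactly as traced above), then start the workload.
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests

  # Afterwards the digest helper reads back the transient transport error count.
  $rpc bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'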
00:37:23.345 [2024-12-13 10:39:17.021714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.345 [2024-12-13 10:39:17.021760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.345 [2024-12-13 10:39:17.021777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.345 [2024-12-13 10:39:17.028053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.345 [2024-12-13 10:39:17.028087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.345 [2024-12-13 10:39:17.028101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.345 [2024-12-13 10:39:17.034072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.345 [2024-12-13 10:39:17.034100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.345 [2024-12-13 10:39:17.034112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.345 [2024-12-13 10:39:17.040108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.345 [2024-12-13 10:39:17.040137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.345 [2024-12-13 10:39:17.040150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.345 [2024-12-13 10:39:17.046090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.345 [2024-12-13 10:39:17.046122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.345 [2024-12-13 10:39:17.046135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.345 [2024-12-13 10:39:17.052091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.345 [2024-12-13 10:39:17.052119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.345 [2024-12-13 10:39:17.052131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.345 [2024-12-13 10:39:17.058082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.345 [2024-12-13 10:39:17.058109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.345 [2024-12-13 10:39:17.058121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.345 [2024-12-13 10:39:17.063996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.345 [2024-12-13 10:39:17.064024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.345 [2024-12-13 10:39:17.064036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.345 [2024-12-13 10:39:17.070079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.345 [2024-12-13 10:39:17.070106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.345 [2024-12-13 10:39:17.070117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.345 [2024-12-13 10:39:17.076022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.345 [2024-12-13 10:39:17.076049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.076068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.082014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.082041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.082052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.087925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.087954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.087966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.093986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.094013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.094026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.099943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.099970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 
[2024-12-13 10:39:17.099982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.105877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.105904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.105916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.111752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.111780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.111792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.117802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.117831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.117842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.123743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.123771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.123782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.129713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.129741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.129753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.135698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.135726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.135738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.141713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.141741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.141754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.147754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.147786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.147797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.153756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.153783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.153794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.159733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.159760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.159772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.165719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.165747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.165759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.171671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.171698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.171710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.177683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.177711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.177722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.183633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 
10:39:17.183660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.183671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.189546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.189574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.189585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.195427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.195460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.195472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.201211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.201238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.201250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.206862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.206891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.206902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.212720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.212748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.212760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.218697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.218725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.218737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.224710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.224737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.224748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.346 [2024-12-13 10:39:17.230675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.346 [2024-12-13 10:39:17.230702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.346 [2024-12-13 10:39:17.230713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.606 [2024-12-13 10:39:17.236809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.606 [2024-12-13 10:39:17.236837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.606 [2024-12-13 10:39:17.236850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.606 [2024-12-13 10:39:17.243015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.606 [2024-12-13 10:39:17.243042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.606 [2024-12-13 10:39:17.243053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.606 [2024-12-13 10:39:17.249190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.606 [2024-12-13 10:39:17.249217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.606 [2024-12-13 10:39:17.249235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.606 [2024-12-13 10:39:17.255124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.606 [2024-12-13 10:39:17.255152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.606 [2024-12-13 10:39:17.255163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.606 [2024-12-13 10:39:17.261022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.606 [2024-12-13 10:39:17.261050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.606 [2024-12-13 10:39:17.261061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.606 
[2024-12-13 10:39:17.267097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.606 [2024-12-13 10:39:17.267125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.606 [2024-12-13 10:39:17.267137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.606 [2024-12-13 10:39:17.273469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.606 [2024-12-13 10:39:17.273497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.606 [2024-12-13 10:39:17.273509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.606 [2024-12-13 10:39:17.279770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.606 [2024-12-13 10:39:17.279798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.606 [2024-12-13 10:39:17.279810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.606 [2024-12-13 10:39:17.285979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.606 [2024-12-13 10:39:17.286006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.606 [2024-12-13 10:39:17.286018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.606 [2024-12-13 10:39:17.292057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.606 [2024-12-13 10:39:17.292085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.292097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.297789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.297816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.297828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.303904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.303932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.303944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.310012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.310040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.310052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.316095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.316122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.316134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.322272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.322299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.322311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.328512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.328538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.328550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.334619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.334652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.334663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.340911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.340939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.340952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.347220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.347248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 
10:39:17.347260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.353318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.353345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.353361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.356828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.356854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.356866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.363265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.363292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.363304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.369832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.369859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.369871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.375675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.375701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.375713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.381530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.381557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.381569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.387408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.387435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.387446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.393262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.393288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.393299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.399175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.399202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.399214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.405327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.405355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.405367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.411729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.411758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.411771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.417981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.418009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.418021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.424209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.424236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.424248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.431085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 
10:39:17.431113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.431126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.439018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.439046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.439058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.446509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.446538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.446550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.453254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.453281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.453293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.607 [2024-12-13 10:39:17.459258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.607 [2024-12-13 10:39:17.459284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.607 [2024-12-13 10:39:17.459300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.608 [2024-12-13 10:39:17.465270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.608 [2024-12-13 10:39:17.465297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.608 [2024-12-13 10:39:17.465309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.608 [2024-12-13 10:39:17.471696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.608 [2024-12-13 10:39:17.471723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.608 [2024-12-13 10:39:17.471735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.608 [2024-12-13 10:39:17.478801] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.608 [2024-12-13 10:39:17.478829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.608 [2024-12-13 10:39:17.478841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.608 [2024-12-13 10:39:17.486894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.608 [2024-12-13 10:39:17.486922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.608 [2024-12-13 10:39:17.486935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.608 [2024-12-13 10:39:17.493983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.608 [2024-12-13 10:39:17.494011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.608 [2024-12-13 10:39:17.494024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.867 [2024-12-13 10:39:17.500806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.867 [2024-12-13 10:39:17.500833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.867 [2024-12-13 10:39:17.500846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.867 [2024-12-13 10:39:17.506991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.867 [2024-12-13 10:39:17.507018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.867 [2024-12-13 10:39:17.507030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.867 [2024-12-13 10:39:17.513078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.867 [2024-12-13 10:39:17.513106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.867 [2024-12-13 10:39:17.513118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.867 [2024-12-13 10:39:17.519844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.867 [2024-12-13 10:39:17.519871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.867 [2024-12-13 10:39:17.519883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.867 [2024-12-13 10:39:17.527559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.867 [2024-12-13 10:39:17.527587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.527599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.535332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.535360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.535372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.542252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.542283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.542296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.549015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.549043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.549055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.555352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.555379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.555391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.561585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.561612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.561624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.567804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.567831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.567842] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.574020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.574047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.574063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.580067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.580094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.580106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.586193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.586220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.586232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.592589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.592616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.592629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.598813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.598840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.598852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.604982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.605009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.605021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.610906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.610934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6304 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.610946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.616821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.616848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.616861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.622654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.622681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.622693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.628803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.628830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.628842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.634868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.634895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.634907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.640923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.640949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.640961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.647036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.647064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.647076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.653073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.653100] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.653112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.659080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.659107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.659119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.665191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.665217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.665228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.671309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.671336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.671347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.677457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.677483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.677498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.683745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.683772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.683784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.689952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.689979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.689991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.868 [2024-12-13 10:39:17.696201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x615000325f80) 00:37:23.868 [2024-12-13 10:39:17.696228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.868 [2024-12-13 10:39:17.696240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.869 [2024-12-13 10:39:17.702436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.869 [2024-12-13 10:39:17.702468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.869 [2024-12-13 10:39:17.702496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.869 [2024-12-13 10:39:17.708771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.869 [2024-12-13 10:39:17.708798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.869 [2024-12-13 10:39:17.708810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.869 [2024-12-13 10:39:17.714934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.869 [2024-12-13 10:39:17.714960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.869 [2024-12-13 10:39:17.714972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.869 [2024-12-13 10:39:17.721104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.869 [2024-12-13 10:39:17.721132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.869 [2024-12-13 10:39:17.721144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.869 [2024-12-13 10:39:17.727391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.869 [2024-12-13 10:39:17.727417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.869 [2024-12-13 10:39:17.727429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.869 [2024-12-13 10:39:17.733636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.869 [2024-12-13 10:39:17.733662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.869 [2024-12-13 10:39:17.733674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:23.869 [2024-12-13 
10:39:17.739914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.869 [2024-12-13 10:39:17.739940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.869 [2024-12-13 10:39:17.739952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:23.869 [2024-12-13 10:39:17.745705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.869 [2024-12-13 10:39:17.745731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.869 [2024-12-13 10:39:17.745743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:23.869 [2024-12-13 10:39:17.751799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.869 [2024-12-13 10:39:17.751825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.869 [2024-12-13 10:39:17.751836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:23.869 [2024-12-13 10:39:17.758027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:23.869 [2024-12-13 10:39:17.758054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:23.869 [2024-12-13 10:39:17.758066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.764353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.764379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.764391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.770592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.770618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.770630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.776741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.776767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.776779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.782714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.782740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.782755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.789218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.789246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.789258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.795550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.795577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.795589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.801803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.801829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.801841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.807894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.807921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.807933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.814066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.814093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.814105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.820432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.820465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 
10:39:17.820478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.826601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.826628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.826640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.832974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.832999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.833011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.839181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.839211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.839223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.845489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.845515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.845527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.852105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.852132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.852144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.858304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.858330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.858341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.864457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.864483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.864495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.870618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.870645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.870665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.876600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.876627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.876639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.882720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.882746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.882758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.888921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.888948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.888963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.895025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.895051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.895063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.901188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 10:39:17.901214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.901226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.129 [2024-12-13 10:39:17.907278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.129 [2024-12-13 
10:39:17.907305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.129 [2024-12-13 10:39:17.907317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.130 [2024-12-13 10:39:17.913186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.130 [2024-12-13 10:39:17.913212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.130 [2024-12-13 10:39:17.913224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.130 [2024-12-13 10:39:17.918940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.130 [2024-12-13 10:39:17.918966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.130 [2024-12-13 10:39:17.918978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.130 [2024-12-13 10:39:17.924877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.130 [2024-12-13 10:39:17.924904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.130 [2024-12-13 10:39:17.924916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.130 [2024-12-13 10:39:17.930850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.130 [2024-12-13 10:39:17.930876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.130 [2024-12-13 10:39:17.930888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.130 [2024-12-13 10:39:17.936927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.130 [2024-12-13 10:39:17.936952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.130 [2024-12-13 10:39:17.936964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.130 [2024-12-13 10:39:17.943253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.130 [2024-12-13 10:39:17.943283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.130 [2024-12-13 10:39:17.943295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.130 [2024-12-13 10:39:17.949357] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.130 [2024-12-13 10:39:17.949383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.130 [2024-12-13 10:39:17.949396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.130 [2024-12-13 10:39:17.955236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.130 [2024-12-13 10:39:17.955262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.130 [2024-12-13 10:39:17.955273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.130 [2024-12-13 10:39:17.961070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.130 [2024-12-13 10:39:17.961096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.130 [2024-12-13 10:39:17.961107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.130 [2024-12-13 10:39:17.966756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.130 [2024-12-13 10:39:17.966782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.130 [2024-12-13 10:39:17.966793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.130 [2024-12-13 10:39:17.972922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.130 [2024-12-13 10:39:17.972948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.130 [2024-12-13 10:39:17.972960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.130 [2024-12-13 10:39:17.979808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.130 [2024-12-13 10:39:17.979835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.130 [2024-12-13 10:39:17.979847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.130 [2024-12-13 10:39:17.987297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.130 [2024-12-13 10:39:17.987324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.130 [2024-12-13 10:39:17.987336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.130 [2024-12-13 10:39:17.994837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.130 [2024-12-13 10:39:17.994864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.130 [2024-12-13 10:39:17.994880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.130 [2024-12-13 10:39:18.002011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.130 [2024-12-13 10:39:18.002037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.130 [2024-12-13 10:39:18.002049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.130 [2024-12-13 10:39:18.008095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.130 [2024-12-13 10:39:18.008123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.130 [2024-12-13 10:39:18.008136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.130 [2024-12-13 10:39:18.014292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.130 [2024-12-13 10:39:18.014319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.130 [2024-12-13 10:39:18.014331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.390 4971.00 IOPS, 621.38 MiB/s [2024-12-13T09:39:18.281Z] [2024-12-13 10:39:18.022015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.390 [2024-12-13 10:39:18.022042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.390 [2024-12-13 10:39:18.022054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.390 [2024-12-13 10:39:18.028786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.390 [2024-12-13 10:39:18.028814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.390 [2024-12-13 10:39:18.028826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.390 [2024-12-13 10:39:18.036732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.390 [2024-12-13 10:39:18.036761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:37:24.390 [2024-12-13 10:39:18.036773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.390 [2024-12-13 10:39:18.044228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.390 [2024-12-13 10:39:18.044257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.390 [2024-12-13 10:39:18.044269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.390 [2024-12-13 10:39:18.051312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.390 [2024-12-13 10:39:18.051341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.390 [2024-12-13 10:39:18.051354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.390 [2024-12-13 10:39:18.059960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.390 [2024-12-13 10:39:18.059993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.390 [2024-12-13 10:39:18.060006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.390 [2024-12-13 10:39:18.067991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.390 [2024-12-13 10:39:18.068019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.390 [2024-12-13 10:39:18.068032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.390 [2024-12-13 10:39:18.076151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.390 [2024-12-13 10:39:18.076179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.390 [2024-12-13 10:39:18.076191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.390 [2024-12-13 10:39:18.084279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.390 [2024-12-13 10:39:18.084307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.390 [2024-12-13 10:39:18.084319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.390 [2024-12-13 10:39:18.092362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.390 [2024-12-13 10:39:18.092391] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.390 [2024-12-13 10:39:18.092403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.390 [2024-12-13 10:39:18.100230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.390 [2024-12-13 10:39:18.100259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.390 [2024-12-13 10:39:18.100271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.390 [2024-12-13 10:39:18.108284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.390 [2024-12-13 10:39:18.108313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.390 [2024-12-13 10:39:18.108326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.390 [2024-12-13 10:39:18.116101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.390 [2024-12-13 10:39:18.116128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.390 [2024-12-13 10:39:18.116141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.390 [2024-12-13 10:39:18.120515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.390 [2024-12-13 10:39:18.120542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.390 [2024-12-13 10:39:18.120559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.390 [2024-12-13 10:39:18.127182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.390 [2024-12-13 10:39:18.127208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.390 [2024-12-13 10:39:18.127220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.390 [2024-12-13 10:39:18.134786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.390 [2024-12-13 10:39:18.134814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.390 [2024-12-13 10:39:18.134826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.390 [2024-12-13 10:39:18.143271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 
00:37:24.390 [2024-12-13 10:39:18.143298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.390 [2024-12-13 10:39:18.143310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.390 [2024-12-13 10:39:18.151093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.390 [2024-12-13 10:39:18.151120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.390 [2024-12-13 10:39:18.151132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.390 [2024-12-13 10:39:18.157850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.390 [2024-12-13 10:39:18.157876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.390 [2024-12-13 10:39:18.157888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.390 [2024-12-13 10:39:18.164119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.390 [2024-12-13 10:39:18.164153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.390 [2024-12-13 10:39:18.164165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.390 [2024-12-13 10:39:18.170418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.390 [2024-12-13 10:39:18.170444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.390 [2024-12-13 10:39:18.170463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.391 [2024-12-13 10:39:18.176728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.391 [2024-12-13 10:39:18.176755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.391 [2024-12-13 10:39:18.176767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.391 [2024-12-13 10:39:18.182992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.391 [2024-12-13 10:39:18.183023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.391 [2024-12-13 10:39:18.183035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.391 [2024-12-13 10:39:18.189235] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.391 [2024-12-13 10:39:18.189261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.391 [2024-12-13 10:39:18.189273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.391 [2024-12-13 10:39:18.195367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.391 [2024-12-13 10:39:18.195394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.391 [2024-12-13 10:39:18.195406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.391 [2024-12-13 10:39:18.201721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.391 [2024-12-13 10:39:18.201748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.391 [2024-12-13 10:39:18.201760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.391 [2024-12-13 10:39:18.207883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.391 [2024-12-13 10:39:18.207910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.391 [2024-12-13 10:39:18.207922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.391 [2024-12-13 10:39:18.214161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.391 [2024-12-13 10:39:18.214188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.391 [2024-12-13 10:39:18.214200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.391 [2024-12-13 10:39:18.220382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.391 [2024-12-13 10:39:18.220409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.391 [2024-12-13 10:39:18.220421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.391 [2024-12-13 10:39:18.226557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.391 [2024-12-13 10:39:18.226583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.391 [2024-12-13 10:39:18.226595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.391 [2024-12-13 10:39:18.232816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.391 [2024-12-13 10:39:18.232849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.391 [2024-12-13 10:39:18.232864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.391 [2024-12-13 10:39:18.238985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.391 [2024-12-13 10:39:18.239013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.391 [2024-12-13 10:39:18.239025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.391 [2024-12-13 10:39:18.245007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.391 [2024-12-13 10:39:18.245034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.391 [2024-12-13 10:39:18.245047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.391 [2024-12-13 10:39:18.251546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.391 [2024-12-13 10:39:18.251572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.391 [2024-12-13 10:39:18.251584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.391 [2024-12-13 10:39:18.257688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.391 [2024-12-13 10:39:18.257716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.391 [2024-12-13 10:39:18.257727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.391 [2024-12-13 10:39:18.263989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.391 [2024-12-13 10:39:18.264016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.391 [2024-12-13 10:39:18.264028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.391 [2024-12-13 10:39:18.270058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.391 [2024-12-13 10:39:18.270085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.391 [2024-12-13 10:39:18.270099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.391 [2024-12-13 10:39:18.276078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.391 [2024-12-13 10:39:18.276105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.391 [2024-12-13 10:39:18.276134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.651 [2024-12-13 10:39:18.282059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.651 [2024-12-13 10:39:18.282086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.651 [2024-12-13 10:39:18.282098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.651 [2024-12-13 10:39:18.288416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.651 [2024-12-13 10:39:18.288446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.651 [2024-12-13 10:39:18.288466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.651 [2024-12-13 10:39:18.294820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.651 [2024-12-13 10:39:18.294848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.651 [2024-12-13 10:39:18.294860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.651 [2024-12-13 10:39:18.301059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.651 [2024-12-13 10:39:18.301087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.651 [2024-12-13 10:39:18.301099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.651 [2024-12-13 10:39:18.307809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.651 [2024-12-13 10:39:18.307836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.651 [2024-12-13 10:39:18.307847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.651 [2024-12-13 10:39:18.314056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.651 [2024-12-13 10:39:18.314083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17728 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.651 [2024-12-13 10:39:18.314095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.651 [2024-12-13 10:39:18.319906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.651 [2024-12-13 10:39:18.319931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.651 [2024-12-13 10:39:18.319941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.651 [2024-12-13 10:39:18.325788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.651 [2024-12-13 10:39:18.325814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.651 [2024-12-13 10:39:18.325825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.651 [2024-12-13 10:39:18.331823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.651 [2024-12-13 10:39:18.331849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.651 [2024-12-13 10:39:18.331861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.651 [2024-12-13 10:39:18.337836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.651 [2024-12-13 10:39:18.337862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.651 [2024-12-13 10:39:18.337874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.651 [2024-12-13 10:39:18.343873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.651 [2024-12-13 10:39:18.343899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.651 [2024-12-13 10:39:18.343911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.651 [2024-12-13 10:39:18.349589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.651 [2024-12-13 10:39:18.349615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.651 [2024-12-13 10:39:18.349626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.651 [2024-12-13 10:39:18.355375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.651 [2024-12-13 10:39:18.355401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.651 [2024-12-13 10:39:18.355412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.651 [2024-12-13 10:39:18.361323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.651 [2024-12-13 10:39:18.361349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.651 [2024-12-13 10:39:18.361360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.651 [2024-12-13 10:39:18.367317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.651 [2024-12-13 10:39:18.367344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.651 [2024-12-13 10:39:18.367355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.651 [2024-12-13 10:39:18.373298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.651 [2024-12-13 10:39:18.373325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.651 [2024-12-13 10:39:18.373336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.651 [2024-12-13 10:39:18.379386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.651 [2024-12-13 10:39:18.379413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.651 [2024-12-13 10:39:18.379424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.651 [2024-12-13 10:39:18.385069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.651 [2024-12-13 10:39:18.385096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.651 [2024-12-13 10:39:18.385107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.651 [2024-12-13 10:39:18.390806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.651 [2024-12-13 10:39:18.390836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.651 [2024-12-13 10:39:18.390848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.651 [2024-12-13 10:39:18.396561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x615000325f80) 00:37:24.651 [2024-12-13 10:39:18.396587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.651 [2024-12-13 10:39:18.396599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.651 [2024-12-13 10:39:18.402277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.651 [2024-12-13 10:39:18.402303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.651 [2024-12-13 10:39:18.402315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.651 [2024-12-13 10:39:18.408127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.651 [2024-12-13 10:39:18.408155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.652 [2024-12-13 10:39:18.408167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.652 [2024-12-13 10:39:18.414099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.652 [2024-12-13 10:39:18.414126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.652 [2024-12-13 10:39:18.414138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.652 [2024-12-13 10:39:18.420089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.652 [2024-12-13 10:39:18.420115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.652 [2024-12-13 10:39:18.420128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.652 [2024-12-13 10:39:18.426064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.652 [2024-12-13 10:39:18.426090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.652 [2024-12-13 10:39:18.426101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.652 [2024-12-13 10:39:18.432016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.652 [2024-12-13 10:39:18.432043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.652 [2024-12-13 10:39:18.432055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.652 [2024-12-13 10:39:18.437833] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.652 [2024-12-13 10:39:18.437859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.652 [2024-12-13 10:39:18.437871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.652 [2024-12-13 10:39:18.443842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.652 [2024-12-13 10:39:18.443868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.652 [2024-12-13 10:39:18.443879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.652 [2024-12-13 10:39:18.450825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.652 [2024-12-13 10:39:18.450853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.652 [2024-12-13 10:39:18.450865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.652 [2024-12-13 10:39:18.458815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.652 [2024-12-13 10:39:18.458845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.652 [2024-12-13 10:39:18.458857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.652 [2024-12-13 10:39:18.466521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.652 [2024-12-13 10:39:18.466547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.652 [2024-12-13 10:39:18.466559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.652 [2024-12-13 10:39:18.474325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.652 [2024-12-13 10:39:18.474354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.652 [2024-12-13 10:39:18.474367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.652 [2024-12-13 10:39:18.482319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.652 [2024-12-13 10:39:18.482347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.652 [2024-12-13 10:39:18.482359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.652 [2024-12-13 10:39:18.490321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.652 [2024-12-13 10:39:18.490349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.652 [2024-12-13 10:39:18.490361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.652 [2024-12-13 10:39:18.498149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.652 [2024-12-13 10:39:18.498176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.652 [2024-12-13 10:39:18.498188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.652 [2024-12-13 10:39:18.506021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.652 [2024-12-13 10:39:18.506053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.652 [2024-12-13 10:39:18.506066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.652 [2024-12-13 10:39:18.513852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.652 [2024-12-13 10:39:18.513880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.652 [2024-12-13 10:39:18.513892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.652 [2024-12-13 10:39:18.521639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.652 [2024-12-13 10:39:18.521668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.652 [2024-12-13 10:39:18.521680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.652 [2024-12-13 10:39:18.529580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.652 [2024-12-13 10:39:18.529607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.652 [2024-12-13 10:39:18.529619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.652 [2024-12-13 10:39:18.537682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.652 [2024-12-13 10:39:18.537711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.652 [2024-12-13 10:39:18.537724] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.912 [2024-12-13 10:39:18.545812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.912 [2024-12-13 10:39:18.545840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.912 [2024-12-13 10:39:18.545853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.912 [2024-12-13 10:39:18.553359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.912 [2024-12-13 10:39:18.553387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.912 [2024-12-13 10:39:18.553400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.912 [2024-12-13 10:39:18.561058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.912 [2024-12-13 10:39:18.561088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.912 [2024-12-13 10:39:18.561100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.912 [2024-12-13 10:39:18.567918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.912 [2024-12-13 10:39:18.567948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.912 [2024-12-13 10:39:18.567966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.912 [2024-12-13 10:39:18.574938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.912 [2024-12-13 10:39:18.574966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.912 [2024-12-13 10:39:18.574979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.912 [2024-12-13 10:39:18.581796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.912 [2024-12-13 10:39:18.581824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.912 [2024-12-13 10:39:18.581837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.912 [2024-12-13 10:39:18.588544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.912 [2024-12-13 10:39:18.588572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14240 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:24.912 [2024-12-13 10:39:18.588585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.912 [2024-12-13 10:39:18.594539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.912 [2024-12-13 10:39:18.594566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.912 [2024-12-13 10:39:18.594577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.912 [2024-12-13 10:39:18.600408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.912 [2024-12-13 10:39:18.600436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.912 [2024-12-13 10:39:18.600454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.912 [2024-12-13 10:39:18.606249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.912 [2024-12-13 10:39:18.606277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.912 [2024-12-13 10:39:18.606289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.912 [2024-12-13 10:39:18.612217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.912 [2024-12-13 10:39:18.612245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.912 [2024-12-13 10:39:18.612257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.912 [2024-12-13 10:39:18.618252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.618279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.618291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.624274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.624301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.624317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.630381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.630408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.630420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.636364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.636391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.636403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.642407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.642435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.642447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.648572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.648599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.648611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.654732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.654759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.654770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.660748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.660775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.660786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.666777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.666804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.666816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.672794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.672822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.672833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.678894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.678921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.678933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.684708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.684735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.684747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.690527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.690554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.690565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.696605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.696632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.696643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.702607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.702635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.702646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.708682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.708709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.708729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.714784] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.714811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.714823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.720849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.720876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.720888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.726938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.726965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.726980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.732995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.733022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.733034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.739090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.739117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.739129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.745115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.745142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.745154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.751156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.751183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.751195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.757148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.757174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.757188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.763078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.763107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.763120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.769103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.769129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.769141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.775139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.775166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.913 [2024-12-13 10:39:18.775178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:24.913 [2024-12-13 10:39:18.781133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.913 [2024-12-13 10:39:18.781160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.914 [2024-12-13 10:39:18.781172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:24.914 [2024-12-13 10:39:18.787128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.914 [2024-12-13 10:39:18.787155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.914 [2024-12-13 10:39:18.787167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:24.914 [2024-12-13 10:39:18.793198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.914 [2024-12-13 10:39:18.793225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.914 [2024-12-13 10:39:18.793237] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:24.914 [2024-12-13 10:39:18.799226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:24.914 [2024-12-13 10:39:18.799253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.914 [2024-12-13 10:39:18.799265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:25.174 [2024-12-13 10:39:18.805338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.174 [2024-12-13 10:39:18.805366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.174 [2024-12-13 10:39:18.805378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:25.174 [2024-12-13 10:39:18.811520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.174 [2024-12-13 10:39:18.811548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.174 [2024-12-13 10:39:18.811560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:25.174 [2024-12-13 10:39:18.817586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.174 [2024-12-13 10:39:18.817617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.174 [2024-12-13 10:39:18.817630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:25.174 [2024-12-13 10:39:18.823633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.174 [2024-12-13 10:39:18.823661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.174 [2024-12-13 10:39:18.823673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:25.174 [2024-12-13 10:39:18.829700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.174 [2024-12-13 10:39:18.829727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.174 [2024-12-13 10:39:18.829743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:25.174 [2024-12-13 10:39:18.835710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.174 [2024-12-13 10:39:18.835736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4384 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:25.174 [2024-12-13 10:39:18.835747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:25.174 [2024-12-13 10:39:18.841760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.174 [2024-12-13 10:39:18.841787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.174 [2024-12-13 10:39:18.841799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:25.174 [2024-12-13 10:39:18.847822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.174 [2024-12-13 10:39:18.847850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.847862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.854023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.854051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.854063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.860078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.860105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.860116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.866196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.866223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.866235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.872297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.872324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.872336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.878364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.878391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.878403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.884391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.884422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.884434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.890317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.890345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.890358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.896538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.896565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.896577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.902530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.902558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.902570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.908526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.908554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.908566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.914589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.914619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.914631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.920684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.920712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.920724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.926729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.926757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.926769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.932733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.932760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.932776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.938790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.938818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.938829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.944829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.944857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.944868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.950884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.950912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.950923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.956963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.956991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.957003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.963846] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.963874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.963886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.971508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.971543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.971555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.978574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.978602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.978615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.982842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.982870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.982882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.990149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.990190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.990203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:18.997314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:18.997343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:18.997355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:19.006341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:19.006370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:19.006383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:25.175 [2024-12-13 10:39:19.014116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:19.014143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.175 [2024-12-13 10:39:19.014155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:25.175 4875.00 IOPS, 609.38 MiB/s [2024-12-13T09:39:19.066Z] [2024-12-13 10:39:19.022384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:25.175 [2024-12-13 10:39:19.022412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:25.176 [2024-12-13 10:39:19.022425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:25.176 00:37:25.176 Latency(us) 00:37:25.176 [2024-12-13T09:39:19.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:25.176 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:25.176 nvme0n1 : 2.01 4871.19 608.90 0.00 0.00 3281.03 709.97 12170.97 00:37:25.176 [2024-12-13T09:39:19.067Z] =================================================================================================================== 00:37:25.176 [2024-12-13T09:39:19.067Z] Total : 4871.19 608.90 0.00 0.00 3281.03 709.97 12170.97 00:37:25.176 { 00:37:25.176 "results": [ 00:37:25.176 { 00:37:25.176 "job": "nvme0n1", 00:37:25.176 "core_mask": "0x2", 00:37:25.176 "workload": "randread", 00:37:25.176 "status": "finished", 00:37:25.176 "queue_depth": 16, 00:37:25.176 "io_size": 131072, 00:37:25.176 "runtime": 2.005054, 00:37:25.176 "iops": 4871.19050160245, 00:37:25.176 "mibps": 608.8988127003063, 00:37:25.176 "io_failed": 0, 00:37:25.176 "io_timeout": 0, 00:37:25.176 "avg_latency_us": 3281.031456946862, 00:37:25.176 "min_latency_us": 709.9733333333334, 00:37:25.176 "max_latency_us": 12170.971428571429 00:37:25.176 } 00:37:25.176 ], 00:37:25.176 "core_count": 1 00:37:25.176 } 00:37:25.176 10:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:25.176 10:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:25.176 10:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:25.176 10:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:25.176 | .driver_specific 00:37:25.176 | .nvme_error 00:37:25.176 | .status_code 00:37:25.176 | .command_transient_transport_error' 00:37:25.434 10:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 316 > 0 )) 00:37:25.434 10:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4147416 00:37:25.434 10:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4147416 ']' 00:37:25.434 10:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # 
kill -0 4147416 00:37:25.434 10:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:25.434 10:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:25.434 10:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4147416 00:37:25.434 10:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:25.434 10:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:25.434 10:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4147416' 00:37:25.434 killing process with pid 4147416 00:37:25.434 10:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4147416 00:37:25.434 Received shutdown signal, test time was about 2.000000 seconds 00:37:25.434 00:37:25.434 Latency(us) 00:37:25.434 [2024-12-13T09:39:19.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:25.434 [2024-12-13T09:39:19.325Z] =================================================================================================================== 00:37:25.434 [2024-12-13T09:39:19.325Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:25.434 10:39:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4147416 00:37:26.370 10:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:37:26.370 10:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:26.370 10:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:26.370 10:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:26.370 10:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:26.370 10:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4148194 00:37:26.370 10:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4148194 /var/tmp/bperf.sock 00:37:26.370 10:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:37:26.370 10:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4148194 ']' 00:37:26.370 10:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:26.370 10:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:26.370 10:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:26.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
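
The pass/fail decision for the randread pass above does not come from the I/O completions themselves (they are retried) but from bdevperf's per-bdev NVMe error counters. A condensed sketch of the check traced at host/digest.sh@71 follows; the paths, bdev name, and jq filter are copied from the trace above, error handling is omitted, and it assumes bdevperf is still serving RPC on /var/tmp/bperf.sock with the attached controller exposed as bdev nvme0n1.

  # Sketch of the transient-error check traced above (host/digest.sh@71).
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # bdev_get_iostat carries the per-status-code NVMe error counters kept when
  # bdev_nvme_set_options --nvme-error-stat is enabled; pull out the
  # "command transient transport error" count for nvme0n1.
  errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

  # The digest test only passes if the injected crc32c corruption actually
  # produced transient transport errors (316 of them in the run above).
  (( errcount > 0 ))

Once that count is verified, the harness kills the previous bdevperf instance (pid 4147416 above) and launches a fresh one for the next pattern, run_bperf_err randwrite 4096 128, as the trace shows.
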
00:37:26.370 10:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:26.370 10:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:26.370 [2024-12-13 10:39:20.240246] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:26.371 [2024-12-13 10:39:20.240337] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4148194 ] 00:37:26.629 [2024-12-13 10:39:20.353488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:26.629 [2024-12-13 10:39:20.463791] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:27.197 10:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:27.197 10:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:27.197 10:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:27.197 10:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:27.455 10:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:27.456 10:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.456 10:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:27.456 10:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.456 10:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:27.456 10:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:28.023 nvme0n1 00:37:28.023 10:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:28.023 10:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.023 10:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:28.023 10:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.023 10:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:28.023 10:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:28.023 Running I/O for 2 seconds... 
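
Before the "Running I/O for 2 seconds..." line above, the trace walks through the full setup for the randwrite digest-error pass. The sketch below condenses that sequence; all commands and arguments are copied from the trace, the comment on -z reflects bdevperf's wait-for-RPC behaviour, and which RPC socket the harness's rpc_cmd wrapper targets for the accel injection is not visible in this excerpt, so it is left as the wrapper call.

  # Sketch of the setup traced above for the randwrite digest-error pass.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # Start bdevperf idle (-z: wait for the perform_tests RPC) with the workload
  # parameters used above: core mask 0x2, randwrite, 4 KiB I/O, QD 128, 2 s.
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
    -w randwrite -o 4096 -t 2 -q 128 -z &
  # (the harness then waits for the RPC socket via waitforlisten, as traced)

  # Keep per-status-code NVMe error counters and retry I/O indefinitely, so
  # injected digest errors are counted instead of failing the run.
  "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

  # Make sure no stale crc32c injection is active while the controller attaches.
  rpc_cmd accel_error_inject_error -o crc32c -t disable

  # Attach the target over TCP with data digest (--ddgst) enabled; the
  # controller's namespace shows up inside bdevperf as bdev nvme0n1.
  "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt every 256th crc32c accel operation so data digests mismatch.
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

  # Kick off the timed run; bdevperf prints "Running I/O for 2 seconds...".
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

The digest-error notices that follow are the expected result of that injection: each mismatch is reported as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) and accumulated in the counters checked after the run.
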
00:37:28.023 [2024-12-13 10:39:21.761683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173edd58 00:37:28.023 [2024-12-13 10:39:21.762526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.023 [2024-12-13 10:39:21.762563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:28.023 [2024-12-13 10:39:21.771611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fa3a0 00:37:28.023 [2024-12-13 10:39:21.772407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.023 [2024-12-13 10:39:21.772438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:37:28.023 [2024-12-13 10:39:21.782599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e2c28 00:37:28.023 [2024-12-13 10:39:21.783546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.023 [2024-12-13 10:39:21.783574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:28.023 [2024-12-13 10:39:21.795171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3060 00:37:28.023 [2024-12-13 10:39:21.796700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.023 [2024-12-13 10:39:21.796727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:28.023 [2024-12-13 10:39:21.805987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f9f68 00:37:28.023 [2024-12-13 10:39:21.807627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.023 [2024-12-13 10:39:21.807654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:28.023 [2024-12-13 10:39:21.813269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eff18 00:37:28.023 [2024-12-13 10:39:21.813960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.023 [2024-12-13 10:39:21.813987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:37:28.023 [2024-12-13 10:39:21.823674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fac10 00:37:28.024 [2024-12-13 10:39:21.824349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.024 [2024-12-13 10:39:21.824376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:28.024 [2024-12-13 10:39:21.833960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fac10 00:37:28.024 [2024-12-13 10:39:21.834641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.024 [2024-12-13 10:39:21.834668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:28.024 [2024-12-13 10:39:21.843526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:37:28.024 [2024-12-13 10:39:21.844190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.024 [2024-12-13 10:39:21.844215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:37:28.024 [2024-12-13 10:39:21.854294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6cc8 00:37:28.024 [2024-12-13 10:39:21.855197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.024 [2024-12-13 10:39:21.855222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:37:28.024 [2024-12-13 10:39:21.865063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1ca0 00:37:28.024 [2024-12-13 10:39:21.866092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.024 [2024-12-13 10:39:21.866117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:28.024 [2024-12-13 10:39:21.875847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0ff8 00:37:28.024 [2024-12-13 10:39:21.877025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.024 [2024-12-13 10:39:21.877054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:37:28.024 [2024-12-13 10:39:21.886599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5a90 00:37:28.024 [2024-12-13 10:39:21.887899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.024 [2024-12-13 10:39:21.887925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:28.024 [2024-12-13 10:39:21.897536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6cc8 00:37:28.024 [2024-12-13 10:39:21.898987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.024 [2024-12-13 10:39:21.899012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:28.024 [2024-12-13 10:39:21.908282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e01f8 00:37:28.024 [2024-12-13 10:39:21.909870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.024 [2024-12-13 10:39:21.909898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:37:28.283 [2024-12-13 10:39:21.918107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2948 00:37:28.283 [2024-12-13 10:39:21.919212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.283 [2024-12-13 10:39:21.919240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:28.283 [2024-12-13 10:39:21.927582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6890 00:37:28.283 [2024-12-13 10:39:21.928734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.283 [2024-12-13 10:39:21.928759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:28.283 [2024-12-13 10:39:21.938365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dfdc0 00:37:28.283 [2024-12-13 10:39:21.939724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.283 [2024-12-13 10:39:21.939750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:28.283 [2024-12-13 10:39:21.949351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8088 00:37:28.283 [2024-12-13 10:39:21.950725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.283 [2024-12-13 10:39:21.950751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.283 [2024-12-13 10:39:21.959547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df550 00:37:28.283 [2024-12-13 10:39:21.960622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.283 [2024-12-13 10:39:21.960647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:37:28.283 [2024-12-13 10:39:21.968943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eff18 00:37:28.283 [2024-12-13 10:39:21.970015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:37:28.283 [2024-12-13 10:39:21.970040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:28.283 [2024-12-13 10:39:21.979728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb048 00:37:28.283 [2024-12-13 10:39:21.980965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.283 [2024-12-13 10:39:21.980989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:28.283 [2024-12-13 10:39:21.990583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5a90 00:37:28.283 [2024-12-13 10:39:21.991919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.283 [2024-12-13 10:39:21.991949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:28.283 [2024-12-13 10:39:22.001391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4140 00:37:28.283 [2024-12-13 10:39:22.002868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.283 [2024-12-13 10:39:22.002893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:37:28.283 [2024-12-13 10:39:22.012228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e27f0 00:37:28.283 [2024-12-13 10:39:22.013917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.283 [2024-12-13 10:39:22.013943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:37:28.283 [2024-12-13 10:39:22.019712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6890 00:37:28.283 [2024-12-13 10:39:22.020470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.283 [2024-12-13 10:39:22.020495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:37:28.283 [2024-12-13 10:39:22.030354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fbcf0 00:37:28.283 [2024-12-13 10:39:22.031051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.283 [2024-12-13 10:39:22.031077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:28.283 [2024-12-13 10:39:22.040057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7100 00:37:28.283 [2024-12-13 10:39:22.040784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 
nsid:1 lba:17815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.283 [2024-12-13 10:39:22.040809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:37:28.283 [2024-12-13 10:39:22.050813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0788 00:37:28.283 [2024-12-13 10:39:22.051698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.283 [2024-12-13 10:39:22.051722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:37:28.283 [2024-12-13 10:39:22.061605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef270 00:37:28.283 [2024-12-13 10:39:22.062616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.283 [2024-12-13 10:39:22.062642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:28.283 [2024-12-13 10:39:22.072362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1868 00:37:28.283 [2024-12-13 10:39:22.073518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.283 [2024-12-13 10:39:22.073543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:37:28.283 [2024-12-13 10:39:22.083135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebb98 00:37:28.283 [2024-12-13 10:39:22.084419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.283 [2024-12-13 10:39:22.084444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:28.283 [2024-12-13 10:39:22.093793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0788 00:37:28.283 [2024-12-13 10:39:22.095215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.283 [2024-12-13 10:39:22.095239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:28.283 [2024-12-13 10:39:22.104562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ff3c8 00:37:28.283 [2024-12-13 10:39:22.106117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.283 [2024-12-13 10:39:22.106142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:28.283 [2024-12-13 10:39:22.115308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eee38 00:37:28.283 [2024-12-13 10:39:22.117005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.283 [2024-12-13 10:39:22.117031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:28.283 [2024-12-13 10:39:22.122576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7970 00:37:28.283 [2024-12-13 10:39:22.123305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.284 [2024-12-13 10:39:22.123330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:37:28.284 [2024-12-13 10:39:22.132320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df550 00:37:28.284 [2024-12-13 10:39:22.133051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.284 [2024-12-13 10:39:22.133077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:37:28.284 [2024-12-13 10:39:22.143791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ff3c8 00:37:28.284 [2024-12-13 10:39:22.144664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.284 [2024-12-13 10:39:22.144690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:28.284 [2024-12-13 10:39:22.154399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1b48 00:37:28.284 [2024-12-13 10:39:22.155394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.284 [2024-12-13 10:39:22.155420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:28.284 [2024-12-13 10:39:22.164664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1f80 00:37:28.284 [2024-12-13 10:39:22.165822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.284 [2024-12-13 10:39:22.165848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:37:28.284 [2024-12-13 10:39:22.174442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f20d8 00:37:28.543 [2024-12-13 10:39:22.175182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.543 [2024-12-13 10:39:22.175208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:28.543 [2024-12-13 10:39:22.184732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000173e9168 00:37:28.543 [2024-12-13 10:39:22.185457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.543 [2024-12-13 10:39:22.185482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:28.543 [2024-12-13 10:39:22.195347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0bc0 00:37:28.543 [2024-12-13 10:39:22.195863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.543 [2024-12-13 10:39:22.195888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:28.543 [2024-12-13 10:39:22.207169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eea00 00:37:28.543 [2024-12-13 10:39:22.208582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.543 [2024-12-13 10:39:22.208607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:28.543 [2024-12-13 10:39:22.216789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd640 00:37:28.543 [2024-12-13 10:39:22.217792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.543 [2024-12-13 10:39:22.217817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:28.543 [2024-12-13 10:39:22.227201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e01f8 00:37:28.543 [2024-12-13 10:39:22.227990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.543 [2024-12-13 10:39:22.228015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:28.543 [2024-12-13 10:39:22.238984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed4e8 00:37:28.543 [2024-12-13 10:39:22.240688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.543 [2024-12-13 10:39:22.240713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:28.543 [2024-12-13 10:39:22.246299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fdeb0 00:37:28.543 [2024-12-13 10:39:22.247020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.543 [2024-12-13 10:39:22.247045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:37:28.543 [2024-12-13 10:39:22.258552] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0350 00:37:28.543 [2024-12-13 10:39:22.259810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.543 [2024-12-13 10:39:22.259835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:28.543 [2024-12-13 10:39:22.268794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0350 00:37:28.543 [2024-12-13 10:39:22.270233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.543 [2024-12-13 10:39:22.270258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:28.543 [2024-12-13 10:39:22.278664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de8a8 00:37:28.543 [2024-12-13 10:39:22.279672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.543 [2024-12-13 10:39:22.279707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:28.543 [2024-12-13 10:39:22.289207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e49b0 00:37:28.543 [2024-12-13 10:39:22.289986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.543 [2024-12-13 10:39:22.290011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:28.543 [2024-12-13 10:39:22.300955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eff18 00:37:28.543 [2024-12-13 10:39:22.302628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.543 [2024-12-13 10:39:22.302654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:28.543 [2024-12-13 10:39:22.308237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e49b0 00:37:28.543 [2024-12-13 10:39:22.308947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.543 [2024-12-13 10:39:22.308972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:37:28.543 [2024-12-13 10:39:22.320527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de8a8 00:37:28.543 [2024-12-13 10:39:22.321779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.543 [2024-12-13 10:39:22.321808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:37:28.543 
[2024-12-13 10:39:22.329236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f46d0 00:37:28.543 [2024-12-13 10:39:22.330046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.543 [2024-12-13 10:39:22.330071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:28.543 [2024-12-13 10:39:22.339704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eea00 00:37:28.543 [2024-12-13 10:39:22.340413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.543 [2024-12-13 10:39:22.340438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:28.543 [2024-12-13 10:39:22.351319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef6a8 00:37:28.543 [2024-12-13 10:39:22.352677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.543 [2024-12-13 10:39:22.352703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:28.543 [2024-12-13 10:39:22.360982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb328 00:37:28.543 [2024-12-13 10:39:22.361953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.543 [2024-12-13 10:39:22.361977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:28.543 [2024-12-13 10:39:22.371123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6b70 00:37:28.543 [2024-12-13 10:39:22.372132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.543 [2024-12-13 10:39:22.372156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:28.543 [2024-12-13 10:39:22.381421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7100 00:37:28.543 [2024-12-13 10:39:22.382399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.543 [2024-12-13 10:39:22.382425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:28.543 [2024-12-13 10:39:22.391699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eee38 00:37:28.543 [2024-12-13 10:39:22.392681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.544 [2024-12-13 10:39:22.392705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:28.544 [2024-12-13 10:39:22.402026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de470 00:37:28.544 [2024-12-13 10:39:22.403005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.544 [2024-12-13 10:39:22.403029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:28.544 [2024-12-13 10:39:22.412248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e38d0 00:37:28.544 [2024-12-13 10:39:22.413243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.544 [2024-12-13 10:39:22.413269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:28.544 [2024-12-13 10:39:22.422552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f35f0 00:37:28.544 [2024-12-13 10:39:22.423527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.544 [2024-12-13 10:39:22.423552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:28.544 [2024-12-13 10:39:22.432939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef270 00:37:28.544 [2024-12-13 10:39:22.433945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.544 [2024-12-13 10:39:22.433971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:28.803 [2024-12-13 10:39:22.443619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dfdc0 00:37:28.803 [2024-12-13 10:39:22.444601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.803 [2024-12-13 10:39:22.444627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:28.803 [2024-12-13 10:39:22.453924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173feb58 00:37:28.803 [2024-12-13 10:39:22.454910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.803 [2024-12-13 10:39:22.454935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:28.803 [2024-12-13 10:39:22.464196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1ca0 00:37:28.803 [2024-12-13 10:39:22.465183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.803 [2024-12-13 10:39:22.465208] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:28.803 [2024-12-13 10:39:22.474359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f92c0 00:37:28.803 [2024-12-13 10:39:22.475342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.803 [2024-12-13 10:39:22.475366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:28.803 [2024-12-13 10:39:22.484642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebfd0 00:37:28.803 [2024-12-13 10:39:22.485625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.803 [2024-12-13 10:39:22.485650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:28.803 [2024-12-13 10:39:22.494937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e84c0 00:37:28.803 [2024-12-13 10:39:22.495940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.803 [2024-12-13 10:39:22.495965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:28.803 [2024-12-13 10:39:22.505518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f57b0 00:37:28.803 [2024-12-13 10:39:22.506388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.803 [2024-12-13 10:39:22.506413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:37:28.803 [2024-12-13 10:39:22.515230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6b70 00:37:28.803 [2024-12-13 10:39:22.516658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.803 [2024-12-13 10:39:22.516684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:28.803 [2024-12-13 10:39:22.524722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fbcf0 00:37:28.803 [2024-12-13 10:39:22.525459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.803 [2024-12-13 10:39:22.525484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:28.803 [2024-12-13 10:39:22.535259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4140 00:37:28.803 [2024-12-13 10:39:22.535932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:37:28.803 [2024-12-13 10:39:22.535957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:28.803 [2024-12-13 10:39:22.545680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173edd58 00:37:28.803 [2024-12-13 10:39:22.546422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.803 [2024-12-13 10:39:22.546454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:28.803 [2024-12-13 10:39:22.556366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7c50 00:37:28.803 [2024-12-13 10:39:22.557217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.803 [2024-12-13 10:39:22.557243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:28.803 [2024-12-13 10:39:22.567069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3060 00:37:28.803 [2024-12-13 10:39:22.567951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.803 [2024-12-13 10:39:22.567977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:28.803 [2024-12-13 10:39:22.577380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dece0 00:37:28.803 [2024-12-13 10:39:22.578241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.803 [2024-12-13 10:39:22.578266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:28.803 [2024-12-13 10:39:22.587703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7970 00:37:28.803 [2024-12-13 10:39:22.588562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.803 [2024-12-13 10:39:22.588587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:28.803 [2024-12-13 10:39:22.597998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe2e8 00:37:28.803 [2024-12-13 10:39:22.598883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.803 [2024-12-13 10:39:22.598907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:28.804 [2024-12-13 10:39:22.608322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0ff8 00:37:28.804 [2024-12-13 10:39:22.609219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:5266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.804 [2024-12-13 10:39:22.609243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:28.804 [2024-12-13 10:39:22.618608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ddc00 00:37:28.804 [2024-12-13 10:39:22.619470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.804 [2024-12-13 10:39:22.619495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:28.804 [2024-12-13 10:39:22.628920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fac10 00:37:28.804 [2024-12-13 10:39:22.629785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.804 [2024-12-13 10:39:22.629810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:28.804 [2024-12-13 10:39:22.639200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3498 00:37:28.804 [2024-12-13 10:39:22.640066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.804 [2024-12-13 10:39:22.640091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:28.804 [2024-12-13 10:39:22.649718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173feb58 00:37:28.804 [2024-12-13 10:39:22.650585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.804 [2024-12-13 10:39:22.650611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:28.804 [2024-12-13 10:39:22.660007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dfdc0 00:37:28.804 [2024-12-13 10:39:22.660873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.804 [2024-12-13 10:39:22.660899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:28.804 [2024-12-13 10:39:22.670317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef270 00:37:28.804 [2024-12-13 10:39:22.671217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.804 [2024-12-13 10:39:22.671241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:28.804 [2024-12-13 10:39:22.680743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f35f0 00:37:28.804 [2024-12-13 10:39:22.681608] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.804 [2024-12-13 10:39:22.681633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:28.804 [2024-12-13 10:39:22.691082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8e88 00:37:28.804 [2024-12-13 10:39:22.691996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:28.804 [2024-12-13 10:39:22.692021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:29.063 [2024-12-13 10:39:22.701605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0a68 00:37:29.063 [2024-12-13 10:39:22.702466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.063 [2024-12-13 10:39:22.702492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:29.063 [2024-12-13 10:39:22.711883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebb98 00:37:29.063 [2024-12-13 10:39:22.712742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.063 [2024-12-13 10:39:22.712767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:29.063 [2024-12-13 10:39:22.722176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eff18 00:37:29.063 [2024-12-13 10:39:22.723038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.063 [2024-12-13 10:39:22.723062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:29.063 [2024-12-13 10:39:22.732439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb328 00:37:29.063 [2024-12-13 10:39:22.733300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.063 [2024-12-13 10:39:22.733325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:29.063 [2024-12-13 10:39:22.742733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de8a8 00:37:29.063 [2024-12-13 10:39:22.743597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.063 [2024-12-13 10:39:22.743622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:29.063 24623.00 IOPS, 96.18 MiB/s [2024-12-13T09:39:22.954Z] [2024-12-13 10:39:22.752978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x2000173eea00 00:37:29.063 [2024-12-13 10:39:22.753821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.063 [2024-12-13 10:39:22.753847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:29.063 [2024-12-13 10:39:22.763532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed0b0 00:37:29.063 [2024-12-13 10:39:22.764155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.063 [2024-12-13 10:39:22.764183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:29.063 [2024-12-13 10:39:22.775335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f3a28 00:37:29.063 [2024-12-13 10:39:22.776897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.063 [2024-12-13 10:39:22.776923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:29.063 [2024-12-13 10:39:22.786434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0bc0 00:37:29.063 [2024-12-13 10:39:22.788088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.063 [2024-12-13 10:39:22.788113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:29.063 [2024-12-13 10:39:22.793721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1868 00:37:29.063 [2024-12-13 10:39:22.794411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.063 [2024-12-13 10:39:22.794436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:29.063 [2024-12-13 10:39:22.805581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dece0 00:37:29.063 [2024-12-13 10:39:22.806465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.063 [2024-12-13 10:39:22.806490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:29.063 [2024-12-13 10:39:22.817227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef270 00:37:29.063 [2024-12-13 10:39:22.818755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.063 [2024-12-13 10:39:22.818781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:37:29.063 [2024-12-13 
10:39:22.825982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:29.064 [2024-12-13 10:39:22.826947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.064 [2024-12-13 10:39:22.826972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:29.064 [2024-12-13 10:39:22.836119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f20d8 00:37:29.064 [2024-12-13 10:39:22.837119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.064 [2024-12-13 10:39:22.837144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:29.064 [2024-12-13 10:39:22.846515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e73e0 00:37:29.064 [2024-12-13 10:39:22.847419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.064 [2024-12-13 10:39:22.847444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:29.064 [2024-12-13 10:39:22.856639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e73e0 00:37:29.064 [2024-12-13 10:39:22.857664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.064 [2024-12-13 10:39:22.857689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:29.064 [2024-12-13 10:39:22.867140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e73e0 00:37:29.064 [2024-12-13 10:39:22.868175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.064 [2024-12-13 10:39:22.868201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:29.064 [2024-12-13 10:39:22.877542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e73e0 00:37:29.064 [2024-12-13 10:39:22.878541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.064 [2024-12-13 10:39:22.878567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:29.064 [2024-12-13 10:39:22.888181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de470 00:37:29.064 [2024-12-13 10:39:22.889220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.064 [2024-12-13 10:39:22.889246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 
cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:29.064 [2024-12-13 10:39:22.898588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:37:29.064 [2024-12-13 10:39:22.899690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.064 [2024-12-13 10:39:22.899716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:29.064 [2024-12-13 10:39:22.908876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:37:29.064 [2024-12-13 10:39:22.910002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.064 [2024-12-13 10:39:22.910028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:29.064 [2024-12-13 10:39:22.920462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:37:29.064 [2024-12-13 10:39:22.922125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.064 [2024-12-13 10:39:22.922151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:29.064 [2024-12-13 10:39:22.931247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eea00 00:37:29.064 [2024-12-13 10:39:22.932972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.064 [2024-12-13 10:39:22.932998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:37:29.064 [2024-12-13 10:39:22.938545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fa3a0 00:37:29.064 [2024-12-13 10:39:22.939380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.064 [2024-12-13 10:39:22.939412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:29.064 [2024-12-13 10:39:22.949023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed920 00:37:29.064 [2024-12-13 10:39:22.949791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.064 [2024-12-13 10:39:22.949816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:29.323 [2024-12-13 10:39:22.958894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ddc00 00:37:29.323 [2024-12-13 10:39:22.959675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.323 [2024-12-13 10:39:22.959701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:29.323 [2024-12-13 10:39:22.969737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6cc8 00:37:29.323 [2024-12-13 10:39:22.970620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.323 [2024-12-13 10:39:22.970645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:37:29.323 [2024-12-13 10:39:22.980223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173dece0 00:37:29.323 [2024-12-13 10:39:22.981174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.323 [2024-12-13 10:39:22.981199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:29.323 [2024-12-13 10:39:22.990924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8d30 00:37:29.323 [2024-12-13 10:39:22.991869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.323 [2024-12-13 10:39:22.991895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:29.323 [2024-12-13 10:39:23.000752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed920 00:37:29.323 [2024-12-13 10:39:23.001519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.323 [2024-12-13 10:39:23.001544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:37:29.323 [2024-12-13 10:39:23.011192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed920 00:37:29.323 [2024-12-13 10:39:23.012051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.323 [2024-12-13 10:39:23.012076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:29.323 [2024-12-13 10:39:23.021831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f57b0 00:37:29.323 [2024-12-13 10:39:23.022430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.323 [2024-12-13 10:39:23.022464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:29.324 [2024-12-13 10:39:23.032446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e88f8 00:37:29.324 [2024-12-13 10:39:23.033373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.324 [2024-12-13 
10:39:23.033400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:29.324 [2024-12-13 10:39:23.043145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3060 00:37:29.324 [2024-12-13 10:39:23.043912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.324 [2024-12-13 10:39:23.043937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:29.324 [2024-12-13 10:39:23.054181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0bc0 00:37:29.324 [2024-12-13 10:39:23.055085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.324 [2024-12-13 10:39:23.055111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:37:29.324 [2024-12-13 10:39:23.064114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8d30 00:37:29.324 [2024-12-13 10:39:23.065636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.324 [2024-12-13 10:39:23.065662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:37:29.324 [2024-12-13 10:39:23.073010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1b48 00:37:29.324 [2024-12-13 10:39:23.073756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.324 [2024-12-13 10:39:23.073782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:37:29.324 [2024-12-13 10:39:23.085134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f35f0 00:37:29.324 [2024-12-13 10:39:23.086160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.324 [2024-12-13 10:39:23.086186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:29.324 [2024-12-13 10:39:23.094737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0ff8 00:37:29.324 [2024-12-13 10:39:23.095751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.324 [2024-12-13 10:39:23.095776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:29.324 [2024-12-13 10:39:23.105556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed920 00:37:29.324 [2024-12-13 10:39:23.106766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2884 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.324 [2024-12-13 10:39:23.106792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:29.324 [2024-12-13 10:39:23.115240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6300 00:37:29.324 [2024-12-13 10:39:23.116100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.324 [2024-12-13 10:39:23.116127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:29.324 [2024-12-13 10:39:23.124702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ea680 00:37:29.324 [2024-12-13 10:39:23.125434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.324 [2024-12-13 10:39:23.125467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:37:29.324 [2024-12-13 10:39:23.135569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0ff8 00:37:29.324 [2024-12-13 10:39:23.136466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.324 [2024-12-13 10:39:23.136493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:37:29.324 [2024-12-13 10:39:23.147941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1430 00:37:29.324 [2024-12-13 10:39:23.149231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.324 [2024-12-13 10:39:23.149258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:37:29.324 [2024-12-13 10:39:23.158556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1430 00:37:29.324 [2024-12-13 10:39:23.159830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.324 [2024-12-13 10:39:23.159855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:37:29.324 [2024-12-13 10:39:23.168964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1430 00:37:29.324 [2024-12-13 10:39:23.170118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.324 [2024-12-13 10:39:23.170143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:37:29.324 [2024-12-13 10:39:23.179277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1430 00:37:29.324 [2024-12-13 10:39:23.180534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.324 [2024-12-13 10:39:23.180559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:37:29.324 [2024-12-13 10:39:23.188902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5658 00:37:29.324 [2024-12-13 10:39:23.190042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.324 [2024-12-13 10:39:23.190067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:37:29.324 [2024-12-13 10:39:23.199700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0ea0 00:37:29.324 [2024-12-13 10:39:23.201052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.324 [2024-12-13 10:39:23.201077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:29.324 [2024-12-13 10:39:23.210603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd640 00:37:29.324 [2024-12-13 10:39:23.212073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.324 [2024-12-13 10:39:23.212102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:29.583 [2024-12-13 10:39:23.221663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1ca0 00:37:29.583 [2024-12-13 10:39:23.223321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.583 [2024-12-13 10:39:23.223346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:29.583 [2024-12-13 10:39:23.232599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1868 00:37:29.583 [2024-12-13 10:39:23.234338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.583 [2024-12-13 10:39:23.234364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:37:29.583 [2024-12-13 10:39:23.239994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0bc0 00:37:29.583 [2024-12-13 10:39:23.240727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.583 [2024-12-13 10:39:23.240752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:37:29.583 [2024-12-13 10:39:23.250384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000173f96f8 00:37:29.583 [2024-12-13 10:39:23.251113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.583 [2024-12-13 10:39:23.251139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.583 [2024-12-13 10:39:23.259988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0788 00:37:29.583 [2024-12-13 10:39:23.260749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.583 [2024-12-13 10:39:23.260774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:37:29.583 [2024-12-13 10:39:23.272615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ec408 00:37:29.583 [2024-12-13 10:39:23.273836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.584 [2024-12-13 10:39:23.273862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:29.584 [2024-12-13 10:39:23.282267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7818 00:37:29.584 [2024-12-13 10:39:23.283731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.584 [2024-12-13 10:39:23.283755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:29.584 [2024-12-13 10:39:23.291333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f92c0 00:37:29.584 [2024-12-13 10:39:23.292172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.584 [2024-12-13 10:39:23.292198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:37:29.584 [2024-12-13 10:39:23.302257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaab8 00:37:29.584 [2024-12-13 10:39:23.303240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.584 [2024-12-13 10:39:23.303266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:37:29.584 [2024-12-13 10:39:23.313257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ec840 00:37:29.584 [2024-12-13 10:39:23.314358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.584 [2024-12-13 10:39:23.314384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:37:29.584 [2024-12-13 10:39:23.324136] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe720 00:37:29.584 [2024-12-13 10:39:23.325332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.584 [2024-12-13 10:39:23.325358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:37:29.584 [2024-12-13 10:39:23.333729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0350 00:37:29.584 [2024-12-13 10:39:23.334445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.584 [2024-12-13 10:39:23.334475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:29.584 [2024-12-13 10:39:23.344202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f5be8 00:37:29.584 [2024-12-13 10:39:23.345145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.584 [2024-12-13 10:39:23.345170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:29.584 [2024-12-13 10:39:23.354918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f3e60 00:37:29.584 [2024-12-13 10:39:23.355613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.584 [2024-12-13 10:39:23.355638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:29.584 [2024-12-13 10:39:23.365696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ddc00 00:37:29.584 [2024-12-13 10:39:23.366537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.584 [2024-12-13 10:39:23.366562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:29.584 [2024-12-13 10:39:23.375403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed4e8 00:37:29.584 [2024-12-13 10:39:23.376839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.584 [2024-12-13 10:39:23.376863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:29.584 [2024-12-13 10:39:23.384243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5658 00:37:29.584 [2024-12-13 10:39:23.385006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.584 [2024-12-13 10:39:23.385034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 
sqhd:0009 p:0 m:0 dnr:0 00:37:29.584 [2024-12-13 10:39:23.395003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3498 00:37:29.584 [2024-12-13 10:39:23.395916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.584 [2024-12-13 10:39:23.395940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:37:29.584 [2024-12-13 10:39:23.405752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f3e60 00:37:29.584 [2024-12-13 10:39:23.406816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.584 [2024-12-13 10:39:23.406840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:37:29.584 [2024-12-13 10:39:23.416548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1710 00:37:29.584 [2024-12-13 10:39:23.417727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.584 [2024-12-13 10:39:23.417752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:37:29.584 [2024-12-13 10:39:23.427336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3d08 00:37:29.584 [2024-12-13 10:39:23.428669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.584 [2024-12-13 10:39:23.428693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:29.584 [2024-12-13 10:39:23.438154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3498 00:37:29.584 [2024-12-13 10:39:23.439630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.584 [2024-12-13 10:39:23.439655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:29.584 [2024-12-13 10:39:23.448883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fbcf0 00:37:29.584 [2024-12-13 10:39:23.450484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.584 [2024-12-13 10:39:23.450508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:29.584 [2024-12-13 10:39:23.458431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed0b0 00:37:29.584 [2024-12-13 10:39:23.459549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.584 [2024-12-13 10:39:23.459574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:29.584 [2024-12-13 10:39:23.468076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e73e0 00:37:29.584 [2024-12-13 10:39:23.469625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.584 [2024-12-13 10:39:23.469650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.477196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:29.844 [2024-12-13 10:39:23.477998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.478024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.488102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e38d0 00:37:29.844 [2024-12-13 10:39:23.489005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.489031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.498907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de8a8 00:37:29.844 [2024-12-13 10:39:23.499972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.499997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.509690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebb98 00:37:29.844 [2024-12-13 10:39:23.510888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.510913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.520554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb8b8 00:37:29.844 [2024-12-13 10:39:23.521880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.521905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.531333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e38d0 00:37:29.844 [2024-12-13 10:39:23.532789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 
10:39:23.532813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.542158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee5c8 00:37:29.844 [2024-12-13 10:39:23.543838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.543864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.553251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6fa8 00:37:29.844 [2024-12-13 10:39:23.555003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.555029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.560561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9e10 00:37:29.844 [2024-12-13 10:39:23.561377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.561402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.570375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0bc0 00:37:29.844 [2024-12-13 10:39:23.571136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.571162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.581220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8e88 00:37:29.844 [2024-12-13 10:39:23.582063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.582088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.592055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f96f8 00:37:29.844 [2024-12-13 10:39:23.593038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.593063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.602196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2d80 00:37:29.844 [2024-12-13 10:39:23.602907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4686 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.602933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.611650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173df118 00:37:29.844 [2024-12-13 10:39:23.612424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.612454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.622436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9e10 00:37:29.844 [2024-12-13 10:39:23.623344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.623370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.633241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ec408 00:37:29.844 [2024-12-13 10:39:23.634286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.634310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.644032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fbcf0 00:37:29.844 [2024-12-13 10:39:23.645271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.645295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.654545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e88f8 00:37:29.844 [2024-12-13 10:39:23.655734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.655760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.665474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb8b8 00:37:29.844 [2024-12-13 10:39:23.666784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.666809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.676259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e73e0 00:37:29.844 [2024-12-13 10:39:23.677714] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.677738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.687062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8618 00:37:29.844 [2024-12-13 10:39:23.688799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.688825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.697988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0630 00:37:29.844 [2024-12-13 10:39:23.699749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.699774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.705282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5658 00:37:29.844 [2024-12-13 10:39:23.706049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.706075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.714964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f57b0 00:37:29.844 [2024-12-13 10:39:23.715760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.844 [2024-12-13 10:39:23.715785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:37:29.844 [2024-12-13 10:39:23.725789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:29.845 [2024-12-13 10:39:23.726686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:29.845 [2024-12-13 10:39:23.726711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:30.103 [2024-12-13 10:39:23.736796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2948 00:37:30.103 [2024-12-13 10:39:23.737868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.103 [2024-12-13 10:39:23.737894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:30.103 [2024-12-13 10:39:23.747712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7538 00:37:30.103 [2024-12-13 
10:39:23.748887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:30.103 [2024-12-13 10:39:23.748912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:30.103 24651.00 IOPS, 96.29 MiB/s 00:37:30.103 Latency(us) 00:37:30.103 [2024-12-13T09:39:23.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:30.103 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:30.103 nvme0n1 : 2.00 24656.05 96.31 0.00 0.00 5185.30 2449.80 13419.28 00:37:30.103 [2024-12-13T09:39:23.994Z] =================================================================================================================== 00:37:30.103 [2024-12-13T09:39:23.995Z] Total : 24656.05 96.31 0.00 0.00 5185.30 2449.80 13419.28 00:37:30.104 { 00:37:30.104 "results": [ 00:37:30.104 { 00:37:30.104 "job": "nvme0n1", 00:37:30.104 "core_mask": "0x2", 00:37:30.104 "workload": "randwrite", 00:37:30.104 "status": "finished", 00:37:30.104 "queue_depth": 128, 00:37:30.104 "io_size": 4096, 00:37:30.104 "runtime": 2.004092, 00:37:30.104 "iops": 24656.053714100948, 00:37:30.104 "mibps": 96.31270982070683, 00:37:30.104 "io_failed": 0, 00:37:30.104 "io_timeout": 0, 00:37:30.104 "avg_latency_us": 5185.2973545230525, 00:37:30.104 "min_latency_us": 2449.7980952380954, 00:37:30.104 "max_latency_us": 13419.27619047619 00:37:30.104 } 00:37:30.104 ], 00:37:30.104 "core_count": 1 00:37:30.104 } 00:37:30.104 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:30.104 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:30.104 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:30.104 | .driver_specific 00:37:30.104 | .nvme_error 00:37:30.104 | .status_code 00:37:30.104 | .command_transient_transport_error' 00:37:30.104 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:30.104 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 193 > 0 )) 00:37:30.104 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4148194 00:37:30.104 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4148194 ']' 00:37:30.104 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4148194 00:37:30.104 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:30.104 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:30.104 10:39:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4148194 00:37:30.363 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:30.363 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:30.363 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 4148194' 00:37:30.363 killing process with pid 4148194 00:37:30.363 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4148194 00:37:30.363 Received shutdown signal, test time was about 2.000000 seconds 00:37:30.363 00:37:30.363 Latency(us) 00:37:30.363 [2024-12-13T09:39:24.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:30.363 [2024-12-13T09:39:24.254Z] =================================================================================================================== 00:37:30.363 [2024-12-13T09:39:24.254Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:30.363 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4148194 00:37:31.300 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:37:31.300 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:31.300 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:31.300 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:37:31.300 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:37:31.300 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=4148990 00:37:31.300 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 4148990 /var/tmp/bperf.sock 00:37:31.300 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:37:31.300 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 4148990 ']' 00:37:31.300 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:31.300 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:31.300 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:31.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:31.300 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:31.300 10:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:31.300 [2024-12-13 10:39:24.969661] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:31.300 [2024-12-13 10:39:24.969748] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4148990 ] 00:37:31.300 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:31.300 Zero copy mechanism will not be used. 
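The xtrace above is the pass/fail decision for this bperf run: digest.sh reads the controller's NVMe error statistics and requires that the injected digest failures showed up as transient transport errors. A minimal standalone sketch of that check, using only the RPC call and jq filter already visible in the trace (get_transient_errcount); the socket and rpc.py paths are copied from this log and are environment-specific:

  BPERF_SOCK=/var/tmp/bperf.sock
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Error statistics are available because the controller was created after
  # bdev_nvme_set_options --nvme-error-stat; pull out the transient transport error count.
  errcount=$("$RPC" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The run passes only if injected digest failures actually surfaced
  # (the trace above shows 193 of them before bdevperf is killed).
  (( errcount > 0 ))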
00:37:31.300 [2024-12-13 10:39:25.080386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:31.300 [2024-12-13 10:39:25.189620] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:32.235 10:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:32.235 10:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:32.235 10:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:32.235 10:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:32.235 10:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:32.235 10:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.235 10:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:32.235 10:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.235 10:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:32.235 10:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:32.494 nvme0n1 00:37:32.495 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:32.495 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.495 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:32.495 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.495 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:32.495 10:39:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:32.495 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:32.495 Zero copy mechanism will not be used. 00:37:32.495 Running I/O for 2 seconds... 
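The trace between the two bperf runs sets up the second error pass (randwrite, 128 KiB I/O, queue depth 16). A condensed sketch of that sequence, assuming the same paths and socket shown in the trace; this is not the literal digest.sh code, and the error-injection RPC is issued with rpc_cmd in the trace, i.e. against the default RPC socket rather than bperf.sock:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_SOCK=/var/tmp/bperf.sock
  # 1. Start bdevperf in wait-for-RPC mode (-z) with the workload parameters from digest.sh.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
  # 2. Keep per-controller NVMe error statistics and retry failed I/O indefinitely,
  #    so digest errors are counted instead of failing the job outright.
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # 3. Attach the target with data digest enabled (--ddgst) so every TCP data PDU carries a CRC32C.
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # 4. Make every 32nd crc32c computation in the accel layer return a corrupted result
  #    (default RPC socket, matching rpc_cmd in the trace); this is what triggers the
  #    data digest errors and TRANSIENT TRANSPORT ERROR completions logged below.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
  # 5. Kick off the 2-second workload.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests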
00:37:32.495 [2024-12-13 10:39:26.286515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.495 [2024-12-13 10:39:26.286613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.495 [2024-12-13 10:39:26.286652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.495 [2024-12-13 10:39:26.292765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.495 [2024-12-13 10:39:26.292845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.495 [2024-12-13 10:39:26.292875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.495 [2024-12-13 10:39:26.297960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.495 [2024-12-13 10:39:26.298045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.495 [2024-12-13 10:39:26.298073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.495 [2024-12-13 10:39:26.303166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.495 [2024-12-13 10:39:26.303246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.495 [2024-12-13 10:39:26.303275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.495 [2024-12-13 10:39:26.308259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.495 [2024-12-13 10:39:26.308338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.495 [2024-12-13 10:39:26.308364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.495 [2024-12-13 10:39:26.313371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.495 [2024-12-13 10:39:26.313466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.495 [2024-12-13 10:39:26.313494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.495 [2024-12-13 10:39:26.318484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.495 [2024-12-13 10:39:26.318578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.495 [2024-12-13 10:39:26.318605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.495 [2024-12-13 10:39:26.323569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.495 [2024-12-13 10:39:26.323635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.495 [2024-12-13 10:39:26.323662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.495 [2024-12-13 10:39:26.328684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.495 [2024-12-13 10:39:26.328766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.495 [2024-12-13 10:39:26.328792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.495 [2024-12-13 10:39:26.333702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.495 [2024-12-13 10:39:26.333785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.495 [2024-12-13 10:39:26.333812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.495 [2024-12-13 10:39:26.338809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.495 [2024-12-13 10:39:26.338891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.495 [2024-12-13 10:39:26.338917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.495 [2024-12-13 10:39:26.343871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.495 [2024-12-13 10:39:26.343946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.495 [2024-12-13 10:39:26.343973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.495 [2024-12-13 10:39:26.348935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.495 [2024-12-13 10:39:26.349010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.495 [2024-12-13 10:39:26.349037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.495 [2024-12-13 10:39:26.353980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.495 [2024-12-13 10:39:26.354053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.495 [2024-12-13 10:39:26.354078] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.495 [2024-12-13 10:39:26.359119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.495 [2024-12-13 10:39:26.359204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.495 [2024-12-13 10:39:26.359237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.495 [2024-12-13 10:39:26.364258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.495 [2024-12-13 10:39:26.364334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.495 [2024-12-13 10:39:26.364360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.495 [2024-12-13 10:39:26.369359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.495 [2024-12-13 10:39:26.369435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.495 [2024-12-13 10:39:26.369467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.495 [2024-12-13 10:39:26.374511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.495 [2024-12-13 10:39:26.374592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.495 [2024-12-13 10:39:26.374618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.495 [2024-12-13 10:39:26.379520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.495 [2024-12-13 10:39:26.379612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.495 [2024-12-13 10:39:26.379638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.495 [2024-12-13 10:39:26.384756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.495 [2024-12-13 10:39:26.384833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.495 [2024-12-13 10:39:26.384860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.755 [2024-12-13 10:39:26.389867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.755 [2024-12-13 10:39:26.389946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:32.755 [2024-12-13 10:39:26.389972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.755 [2024-12-13 10:39:26.395001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.755 [2024-12-13 10:39:26.395074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.755 [2024-12-13 10:39:26.395100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.755 [2024-12-13 10:39:26.400083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.755 [2024-12-13 10:39:26.400161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.755 [2024-12-13 10:39:26.400186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.755 [2024-12-13 10:39:26.405310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.755 [2024-12-13 10:39:26.405426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.755 [2024-12-13 10:39:26.405479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.755 [2024-12-13 10:39:26.411261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.755 [2024-12-13 10:39:26.411430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.755 [2024-12-13 10:39:26.411461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.755 [2024-12-13 10:39:26.417816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.755 [2024-12-13 10:39:26.417983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.755 [2024-12-13 10:39:26.418010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.755 [2024-12-13 10:39:26.424583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.755 [2024-12-13 10:39:26.424715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.424740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.430827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.430961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.430987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.437691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.437850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.437875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.444122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.444271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.444296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.451511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.451655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.451681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.459730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.459921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.459947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.467300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.467418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.467444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.474771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.474912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.474938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.482229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.482376] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.482402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.489524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.489876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.489901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.497212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.497360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.497387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.504485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.504636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.504662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.512120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.512282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.512308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.519904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.520070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.520096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.527062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.527213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.527245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.534113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.534212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.534236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.541649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.541820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.541846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.549616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.549743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.549770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.557089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.557216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.557244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.564848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.565001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.565028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.572502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.572609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.572635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.578264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.578417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.578443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.584005] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.584160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.584185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.589659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.589745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.589771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.595273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.595423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.595454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.600470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.600562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.600586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.605664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.605737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.605761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.756 [2024-12-13 10:39:26.611719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.756 [2024-12-13 10:39:26.611799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.756 [2024-12-13 10:39:26.611823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.757 [2024-12-13 10:39:26.618290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.757 [2024-12-13 10:39:26.618413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.757 [2024-12-13 10:39:26.618439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:37:32.757 [2024-12-13 10:39:26.623924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.757 [2024-12-13 10:39:26.624037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.757 [2024-12-13 10:39:26.624061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.757 [2024-12-13 10:39:26.629157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.757 [2024-12-13 10:39:26.629286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.757 [2024-12-13 10:39:26.629312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.757 [2024-12-13 10:39:26.634488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.757 [2024-12-13 10:39:26.634561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.757 [2024-12-13 10:39:26.634585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.757 [2024-12-13 10:39:26.639569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.757 [2024-12-13 10:39:26.639658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.757 [2024-12-13 10:39:26.639682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.757 [2024-12-13 10:39:26.644741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:32.757 [2024-12-13 10:39:26.644828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.757 [2024-12-13 10:39:26.644852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.017 [2024-12-13 10:39:26.650184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.017 [2024-12-13 10:39:26.650267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.017 [2024-12-13 10:39:26.650293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.017 [2024-12-13 10:39:26.655360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.017 [2024-12-13 10:39:26.655434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.017 [2024-12-13 10:39:26.655466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.017 [2024-12-13 10:39:26.660476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.017 [2024-12-13 10:39:26.660605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.017 [2024-12-13 10:39:26.660631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.017 [2024-12-13 10:39:26.665734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.017 [2024-12-13 10:39:26.665807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.017 [2024-12-13 10:39:26.665831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.017 [2024-12-13 10:39:26.671374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.017 [2024-12-13 10:39:26.671511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.017 [2024-12-13 10:39:26.671537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.017 [2024-12-13 10:39:26.677261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.017 [2024-12-13 10:39:26.677381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.017 [2024-12-13 10:39:26.677408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.017 [2024-12-13 10:39:26.682632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.017 [2024-12-13 10:39:26.682751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.017 [2024-12-13 10:39:26.682779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.017 [2024-12-13 10:39:26.687946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.017 [2024-12-13 10:39:26.688025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.017 [2024-12-13 10:39:26.688049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.017 [2024-12-13 10:39:26.693223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.017 [2024-12-13 10:39:26.693346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.017 [2024-12-13 
10:39:26.693371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.017 [2024-12-13 10:39:26.698361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.017 [2024-12-13 10:39:26.698459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.017 [2024-12-13 10:39:26.698484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.017 [2024-12-13 10:39:26.703560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.017 [2024-12-13 10:39:26.703652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.017 [2024-12-13 10:39:26.703676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.017 [2024-12-13 10:39:26.709160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.017 [2024-12-13 10:39:26.709298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.017 [2024-12-13 10:39:26.709323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.017 [2024-12-13 10:39:26.715184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.017 [2024-12-13 10:39:26.715257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.017 [2024-12-13 10:39:26.715282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.017 [2024-12-13 10:39:26.721239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.017 [2024-12-13 10:39:26.721374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.017 [2024-12-13 10:39:26.721399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.017 [2024-12-13 10:39:26.727530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.017 [2024-12-13 10:39:26.727598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.017 [2024-12-13 10:39:26.727623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.017 [2024-12-13 10:39:26.734021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.017 [2024-12-13 10:39:26.734113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.017 [2024-12-13 10:39:26.734138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.017 [2024-12-13 10:39:26.740019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.017 [2024-12-13 10:39:26.740152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.017 [2024-12-13 10:39:26.740178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.017 [2024-12-13 10:39:26.745279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.017 [2024-12-13 10:39:26.745391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.017 [2024-12-13 10:39:26.745415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.017 [2024-12-13 10:39:26.750513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.017 [2024-12-13 10:39:26.750581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.017 [2024-12-13 10:39:26.750613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.017 [2024-12-13 10:39:26.755725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.017 [2024-12-13 10:39:26.755834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.017 [2024-12-13 10:39:26.755858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.017 [2024-12-13 10:39:26.761080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.017 [2024-12-13 10:39:26.761167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.017 [2024-12-13 10:39:26.761190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.766403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.766549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.766574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.771757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.771854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.771878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.777119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.777201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.777230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.782256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.782332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.782357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.787567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.787661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.787685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.793289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.793360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.793384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.799387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.799473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.799498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.804888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.805007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.805032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.810186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.810318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.810344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.815540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.815683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.815709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.820742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.820843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.820868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.825965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.826066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.826091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.831300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.831456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.831481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.837641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.837709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.837733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.844055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.844351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.844377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.850201] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.850322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.850348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.856209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.856302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.856326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.862282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.862354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.862378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.868187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.868279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.868303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.873939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.874011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.874035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.880293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.880361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.880385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.886256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.886322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.886345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.018 
[2024-12-13 10:39:26.892037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.892173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.892198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.897426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.897516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.897540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.018 [2024-12-13 10:39:26.902649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.018 [2024-12-13 10:39:26.902720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.018 [2024-12-13 10:39:26.902761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:26.907966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:26.908036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:26.908061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:26.913271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:26.913401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:26.913427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:26.918627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:26.918696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:26.918720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:26.923917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:26.924033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:26.924057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:26.929190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:26.929274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:26.929298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:26.934321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:26.934394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:26.934419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:26.939629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:26.939698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:26.939723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:26.945420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:26.945549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:26.945574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:26.951135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:26.951208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:26.951232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:26.956317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:26.956388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:26.956412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:26.961628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:26.961750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:26.961775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:26.966804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:26.966878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:26.966903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:26.971934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:26.972030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:26.972056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:26.977175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:26.977295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:26.977320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:26.982333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:26.982459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:26.982487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:26.988378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:26.988463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:26.988487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:26.994498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:26.994567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:26.994592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:27.000334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:27.000407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:27.000431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:27.005971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:27.006053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:27.006078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:27.012090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:27.012181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:27.012204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:27.018012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:27.018141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:27.018171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:27.024132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:27.024204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:27.024229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:27.030615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:27.030742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:27.030768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:27.036899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:27.036970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:27.036994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:27.043159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:27.043224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:27.043249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:27.049159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:27.049250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:27.049274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.279 [2024-12-13 10:39:27.054888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.279 [2024-12-13 10:39:27.054960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.279 [2024-12-13 10:39:27.054986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.280 [2024-12-13 10:39:27.060617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.280 [2024-12-13 10:39:27.060690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.280 [2024-12-13 10:39:27.060716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.280 [2024-12-13 10:39:27.066864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.280 [2024-12-13 10:39:27.066959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.280 [2024-12-13 10:39:27.066985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.280 [2024-12-13 10:39:27.073621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.280 [2024-12-13 10:39:27.073722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.280 [2024-12-13 10:39:27.073748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.280 [2024-12-13 10:39:27.079643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.280 [2024-12-13 10:39:27.079743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.280 [2024-12-13 10:39:27.079768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.280 [2024-12-13 10:39:27.085824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.280 [2024-12-13 10:39:27.085919] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.280 [2024-12-13 10:39:27.085943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.280 [2024-12-13 10:39:27.092055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.280 [2024-12-13 10:39:27.092122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.280 [2024-12-13 10:39:27.092146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.280 [2024-12-13 10:39:27.098827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.280 [2024-12-13 10:39:27.098918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.280 [2024-12-13 10:39:27.098943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.280 [2024-12-13 10:39:27.105058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.280 [2024-12-13 10:39:27.105322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.280 [2024-12-13 10:39:27.105347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.280 [2024-12-13 10:39:27.111344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.280 [2024-12-13 10:39:27.111422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.280 [2024-12-13 10:39:27.111446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.280 [2024-12-13 10:39:27.117436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.280 [2024-12-13 10:39:27.117526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.280 [2024-12-13 10:39:27.117558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.280 [2024-12-13 10:39:27.123461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.280 [2024-12-13 10:39:27.123535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.280 [2024-12-13 10:39:27.123564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.280 [2024-12-13 10:39:27.130416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:37:33.280 [2024-12-13 10:39:27.130520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.280 [2024-12-13 10:39:27.130544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.280 [2024-12-13 10:39:27.136742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.280 [2024-12-13 10:39:27.136832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.280 [2024-12-13 10:39:27.136856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.280 [2024-12-13 10:39:27.142688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.280 [2024-12-13 10:39:27.142757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.280 [2024-12-13 10:39:27.142780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.280 [2024-12-13 10:39:27.148830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.280 [2024-12-13 10:39:27.148902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.280 [2024-12-13 10:39:27.148926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.280 [2024-12-13 10:39:27.154638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.280 [2024-12-13 10:39:27.154721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.280 [2024-12-13 10:39:27.154745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.280 [2024-12-13 10:39:27.160349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.280 [2024-12-13 10:39:27.160424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.280 [2024-12-13 10:39:27.160456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.280 [2024-12-13 10:39:27.167038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.280 [2024-12-13 10:39:27.167133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.280 [2024-12-13 10:39:27.167159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.540 [2024-12-13 10:39:27.173494] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.540 [2024-12-13 10:39:27.173562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.540 [2024-12-13 10:39:27.173588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.540 [2024-12-13 10:39:27.179231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.540 [2024-12-13 10:39:27.179325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.540 [2024-12-13 10:39:27.179350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.540 [2024-12-13 10:39:27.184858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.540 [2024-12-13 10:39:27.184932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.540 [2024-12-13 10:39:27.184958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.540 [2024-12-13 10:39:27.190116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.540 [2024-12-13 10:39:27.190191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.540 [2024-12-13 10:39:27.190215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.540 [2024-12-13 10:39:27.195385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.540 [2024-12-13 10:39:27.195474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.540 [2024-12-13 10:39:27.195500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.540 [2024-12-13 10:39:27.200866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.540 [2024-12-13 10:39:27.200942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.540 [2024-12-13 10:39:27.200968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.540 [2024-12-13 10:39:27.206425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.540 [2024-12-13 10:39:27.206528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.540 [2024-12-13 10:39:27.206553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:37:33.540 [2024-12-13 10:39:27.211901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.540 [2024-12-13 10:39:27.211993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.540 [2024-12-13 10:39:27.212017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.540 [2024-12-13 10:39:27.217293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.540 [2024-12-13 10:39:27.217381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.540 [2024-12-13 10:39:27.217406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.540 [2024-12-13 10:39:27.222661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.540 [2024-12-13 10:39:27.222748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.540 [2024-12-13 10:39:27.222772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.540 [2024-12-13 10:39:27.227982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.540 [2024-12-13 10:39:27.228053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.540 [2024-12-13 10:39:27.228078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.233627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.233777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.233802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.240231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.240384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.240410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.247047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.247203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.247229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.253781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.253932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.253958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.260357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.260510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.260536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.267064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.267231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.267258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.273658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.273795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.273822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.280813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.280954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.280979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.287317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.288917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.288945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.541 5270.00 IOPS, 658.75 MiB/s [2024-12-13T09:39:27.432Z] [2024-12-13 10:39:27.294761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.294895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.294921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.301607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.301757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.301783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.308835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.308985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.309012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.315336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.315513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.315540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.321981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.322124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.322150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.328783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.328940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.328967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.335080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.335225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.335251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.340588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.340714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.340740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.346415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.346541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.346567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.351979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.352121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.352147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.357852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.358012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.358037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.364853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.365000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.365026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.371340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.371478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.371504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.377116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.377185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.377210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.382421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 
10:39:27.382499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.382523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.387692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.387761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.387789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.393056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.393125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.393150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.541 [2024-12-13 10:39:27.398368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.541 [2024-12-13 10:39:27.398466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.541 [2024-12-13 10:39:27.398490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.542 [2024-12-13 10:39:27.403667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.542 [2024-12-13 10:39:27.403747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.542 [2024-12-13 10:39:27.403771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.542 [2024-12-13 10:39:27.408957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.542 [2024-12-13 10:39:27.409044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.542 [2024-12-13 10:39:27.409068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.542 [2024-12-13 10:39:27.414262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.542 [2024-12-13 10:39:27.414351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.542 [2024-12-13 10:39:27.414375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.542 [2024-12-13 10:39:27.419507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.542 [2024-12-13 10:39:27.419577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.542 [2024-12-13 10:39:27.419602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.542 [2024-12-13 10:39:27.424789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.542 [2024-12-13 10:39:27.424863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.542 [2024-12-13 10:39:27.424888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.542 [2024-12-13 10:39:27.430129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.542 [2024-12-13 10:39:27.430214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.542 [2024-12-13 10:39:27.430240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.435288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.435371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.435396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.440470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.440569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.440594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.445629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.445715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.445740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.450784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.450861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.450885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.455865] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.455955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.455979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.461024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.461111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.461137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.466350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.466434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.466466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.471550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.471632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.471656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.476920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.477012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.477041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.482347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.482420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.482445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.487922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.488002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.488027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.493139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.493213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.493245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.498692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.498828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.498854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.504637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.504782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.504808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.510763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.510880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.510905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.516383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.516489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.516514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.521924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.521996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.522020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.527493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.527571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.527596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.532886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.532957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.532981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.538370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.538459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.538484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.543792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.543867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.543890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.549949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.550018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.550042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.556581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.556653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.556678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.562620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.562808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.562833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.568842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.568928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 
10:39:27.568954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.574661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.574792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.574819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.580535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.580617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.580641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.586571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.586665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.586690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.592025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.592151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.592178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.597530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.597661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.597687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.603902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.604063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.604088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.611461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.611594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.611620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.618274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.618524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.618550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.625946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.626325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.626351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.633301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.633719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.633745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.641078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.641516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.641542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.648818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.649238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.649264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.656601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.656986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.657012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.664368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.664752] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.664779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.672197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.672666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.672693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.679637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.680028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.680055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:33.802 [2024-12-13 10:39:27.687537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:33.802 [2024-12-13 10:39:27.688005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:33.802 [2024-12-13 10:39:27.688032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.062 [2024-12-13 10:39:27.695643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.062 [2024-12-13 10:39:27.696026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.062 [2024-12-13 10:39:27.696052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.062 [2024-12-13 10:39:27.703172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.062 [2024-12-13 10:39:27.703728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.062 [2024-12-13 10:39:27.703754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.062 [2024-12-13 10:39:27.709829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.062 [2024-12-13 10:39:27.710211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.062 [2024-12-13 10:39:27.710237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.062 [2024-12-13 10:39:27.716898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:37:34.062 [2024-12-13 10:39:27.717272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.062 [2024-12-13 10:39:27.717299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.062 [2024-12-13 10:39:27.723540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.062 [2024-12-13 10:39:27.723936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.062 [2024-12-13 10:39:27.723962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.062 [2024-12-13 10:39:27.731143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.062 [2024-12-13 10:39:27.731611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.062 [2024-12-13 10:39:27.731637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.062 [2024-12-13 10:39:27.737534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.062 [2024-12-13 10:39:27.737879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.062 [2024-12-13 10:39:27.737906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.062 [2024-12-13 10:39:27.743016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.062 [2024-12-13 10:39:27.743369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.062 [2024-12-13 10:39:27.743394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.062 [2024-12-13 10:39:27.748616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.062 [2024-12-13 10:39:27.748963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.062 [2024-12-13 10:39:27.748988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.062 [2024-12-13 10:39:27.754074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.062 [2024-12-13 10:39:27.754423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.062 [2024-12-13 10:39:27.754458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.062 [2024-12-13 10:39:27.759591] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.062 [2024-12-13 10:39:27.760058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.062 [2024-12-13 10:39:27.760084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.062 [2024-12-13 10:39:27.766059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.062 [2024-12-13 10:39:27.766465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.062 [2024-12-13 10:39:27.766491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.062 [2024-12-13 10:39:27.772062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.062 [2024-12-13 10:39:27.772407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.062 [2024-12-13 10:39:27.772433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.062 [2024-12-13 10:39:27.777663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.062 [2024-12-13 10:39:27.778006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.062 [2024-12-13 10:39:27.778032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.062 [2024-12-13 10:39:27.783043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.062 [2024-12-13 10:39:27.783390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.062 [2024-12-13 10:39:27.783416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.062 [2024-12-13 10:39:27.789096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.062 [2024-12-13 10:39:27.789516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.062 [2024-12-13 10:39:27.789542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.062 [2024-12-13 10:39:27.795995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.062 [2024-12-13 10:39:27.796357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.062 [2024-12-13 10:39:27.796383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:37:34.062 [2024-12-13 10:39:27.801721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.062 [2024-12-13 10:39:27.802081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.062 [2024-12-13 10:39:27.802108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.062 [2024-12-13 10:39:27.807368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.062 [2024-12-13 10:39:27.807746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.062 [2024-12-13 10:39:27.807773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.812937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.813307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.813334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.818572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.818929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.818956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.823952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.824328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.824354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.830031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.830419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.830446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.835345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.835721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.835747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.840549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.840896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.840921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.845673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.846030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.846056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.850884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.851233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.851262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.856018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.856372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.856397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.861091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.861440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.861471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.866116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.866513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.866539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.871344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.871647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 
10:39:27.871673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.876667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.876996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.877022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.882397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.882732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.882757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.887979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.888316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.888350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.893679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.894012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.894037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.899280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.899631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.899656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.905354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.905684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.905710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.911379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.911717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.911743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.916996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.917338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.917365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.922183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.922529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.922554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.927291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.927628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.927655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.932330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.932668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.932694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.937686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.938033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.938058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.942863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.943224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.943249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.947859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.948194] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.948219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.063 [2024-12-13 10:39:27.952946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.063 [2024-12-13 10:39:27.953281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.063 [2024-12-13 10:39:27.953307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.323 [2024-12-13 10:39:27.958234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.323 [2024-12-13 10:39:27.958578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.323 [2024-12-13 10:39:27.958604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.323 [2024-12-13 10:39:27.965015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.323 [2024-12-13 10:39:27.965328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.323 [2024-12-13 10:39:27.965354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.323 [2024-12-13 10:39:27.971043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.323 [2024-12-13 10:39:27.971385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.323 [2024-12-13 10:39:27.971410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.323 [2024-12-13 10:39:27.976650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.323 [2024-12-13 10:39:27.976971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.323 [2024-12-13 10:39:27.976997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.323 [2024-12-13 10:39:27.981669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.323 [2024-12-13 10:39:27.982008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.323 [2024-12-13 10:39:27.982034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.323 [2024-12-13 10:39:27.986805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:37:34.323 [2024-12-13 10:39:27.987151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.323 [2024-12-13 10:39:27.987176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.323 [2024-12-13 10:39:27.991840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.323 [2024-12-13 10:39:27.992181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.323 [2024-12-13 10:39:27.992206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.323 [2024-12-13 10:39:27.997028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.323 [2024-12-13 10:39:27.997354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.323 [2024-12-13 10:39:27.997380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.323 [2024-12-13 10:39:28.002184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.323 [2024-12-13 10:39:28.002534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.323 [2024-12-13 10:39:28.002560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.323 [2024-12-13 10:39:28.008018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.323 [2024-12-13 10:39:28.008356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.323 [2024-12-13 10:39:28.008381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.323 [2024-12-13 10:39:28.014266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.323 [2024-12-13 10:39:28.014644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.323 [2024-12-13 10:39:28.014670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.323 [2024-12-13 10:39:28.019967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.323 [2024-12-13 10:39:28.020300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.323 [2024-12-13 10:39:28.020328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.323 [2024-12-13 10:39:28.025540] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.323 [2024-12-13 10:39:28.025868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.323 [2024-12-13 10:39:28.025893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.323 [2024-12-13 10:39:28.032188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.323 [2024-12-13 10:39:28.032523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.323 [2024-12-13 10:39:28.032549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.323 [2024-12-13 10:39:28.039372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.323 [2024-12-13 10:39:28.039767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.039793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.046532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.046931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.046957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.053988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.054361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.054387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.061176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.061554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.061580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.067212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.067557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.067583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.073810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.074207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.074232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.080493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.080834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.080860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.086327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.086683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.086709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.091848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.092179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.092205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.097706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.098071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.098102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.103659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.103990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.104016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.109132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.109461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.109487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.114600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.114922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.114948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.120096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.120438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.120470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.125867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.126209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.126234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.131744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.132074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.132100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.137785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.138197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.138222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.144209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.144543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.144568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.150209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.150554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 
10:39:28.150580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.156020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.156352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.156377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.162083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.162401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.162426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.168397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.168734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.168760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.174174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.174502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.174528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.179912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.180244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.180270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.186380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.186721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.186748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.192789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.193172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.193198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.324 [2024-12-13 10:39:28.200325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.324 [2024-12-13 10:39:28.200719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.324 [2024-12-13 10:39:28.200749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.325 [2024-12-13 10:39:28.207658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.325 [2024-12-13 10:39:28.208007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.325 [2024-12-13 10:39:28.208033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.325 [2024-12-13 10:39:28.214044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.584 [2024-12-13 10:39:28.214366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.584 [2024-12-13 10:39:28.214392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.584 [2024-12-13 10:39:28.219731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.584 [2024-12-13 10:39:28.220072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.584 [2024-12-13 10:39:28.220098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.584 [2024-12-13 10:39:28.225478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.584 [2024-12-13 10:39:28.225813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.584 [2024-12-13 10:39:28.225840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.584 [2024-12-13 10:39:28.231222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.584 [2024-12-13 10:39:28.231554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.584 [2024-12-13 10:39:28.231580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.584 [2024-12-13 10:39:28.237297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.584 [2024-12-13 10:39:28.237671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.584 [2024-12-13 10:39:28.237697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.584 [2024-12-13 10:39:28.243074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.584 [2024-12-13 10:39:28.243384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.584 [2024-12-13 10:39:28.243410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.584 [2024-12-13 10:39:28.248564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.584 [2024-12-13 10:39:28.248903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.584 [2024-12-13 10:39:28.248929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.584 [2024-12-13 10:39:28.253699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.584 [2024-12-13 10:39:28.254043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.584 [2024-12-13 10:39:28.254068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.584 [2024-12-13 10:39:28.258957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.584 [2024-12-13 10:39:28.259291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.584 [2024-12-13 10:39:28.259316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.584 [2024-12-13 10:39:28.264563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.584 [2024-12-13 10:39:28.264887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.584 [2024-12-13 10:39:28.264940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.584 [2024-12-13 10:39:28.269709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.584 [2024-12-13 10:39:28.270045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.584 [2024-12-13 10:39:28.270071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.584 [2024-12-13 10:39:28.274750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:37:34.584 [2024-12-13 10:39:28.275084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.584 [2024-12-13 10:39:28.275110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:34.584 [2024-12-13 10:39:28.279707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.584 [2024-12-13 10:39:28.280041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.584 [2024-12-13 10:39:28.280067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:34.584 [2024-12-13 10:39:28.284754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.584 [2024-12-13 10:39:28.285099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.584 [2024-12-13 10:39:28.285125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:34.584 [2024-12-13 10:39:28.289744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:34.584 [2024-12-13 10:39:28.291420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:34.584 [2024-12-13 10:39:28.291445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:34.584 5254.50 IOPS, 656.81 MiB/s 00:37:34.584 Latency(us) 00:37:34.584 [2024-12-13T09:39:28.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:34.584 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:34.584 nvme0n1 : 2.00 5252.59 656.57 0.00 0.00 3041.24 1747.63 8488.47 00:37:34.584 [2024-12-13T09:39:28.475Z] =================================================================================================================== 00:37:34.584 [2024-12-13T09:39:28.475Z] Total : 5252.59 656.57 0.00 0.00 3041.24 1747.63 8488.47 00:37:34.584 { 00:37:34.584 "results": [ 00:37:34.584 { 00:37:34.584 "job": "nvme0n1", 00:37:34.584 "core_mask": "0x2", 00:37:34.584 "workload": "randwrite", 00:37:34.584 "status": "finished", 00:37:34.584 "queue_depth": 16, 00:37:34.584 "io_size": 131072, 00:37:34.584 "runtime": 2.003772, 00:37:34.584 "iops": 5252.593608454455, 00:37:34.584 "mibps": 656.5742010568068, 00:37:34.584 "io_failed": 0, 00:37:34.584 "io_timeout": 0, 00:37:34.584 "avg_latency_us": 3041.2358478905103, 00:37:34.584 "min_latency_us": 1747.6266666666668, 00:37:34.584 "max_latency_us": 8488.47238095238 00:37:34.584 } 00:37:34.584 ], 00:37:34.584 "core_count": 1 00:37:34.584 } 00:37:34.584 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:34.584 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:34.584 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:34.584 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:34.584 | .driver_specific 00:37:34.584 | .nvme_error 00:37:34.584 | .status_code 00:37:34.584 | .command_transient_transport_error' 00:37:34.844 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 340 > 0 )) 00:37:34.844 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 4148990 00:37:34.844 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4148990 ']' 00:37:34.844 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4148990 00:37:34.844 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:34.844 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:34.844 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4148990 00:37:34.844 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:34.844 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:34.844 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4148990' 00:37:34.844 killing process with pid 4148990 00:37:34.844 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4148990 00:37:34.844 Received shutdown signal, test time was about 2.000000 seconds 00:37:34.844 00:37:34.844 Latency(us) 00:37:34.844 [2024-12-13T09:39:28.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:34.844 [2024-12-13T09:39:28.735Z] =================================================================================================================== 00:37:34.844 [2024-12-13T09:39:28.735Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:34.844 10:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4148990 00:37:35.781 10:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 4146348 00:37:35.781 10:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 4146348 ']' 00:37:35.781 10:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 4146348 00:37:35.781 10:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:35.781 10:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:35.781 10:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4146348 00:37:35.781 10:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:35.781 10:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:35.781 10:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 4146348' 00:37:35.781 killing process with pid 4146348 00:37:35.781 10:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 4146348 00:37:35.781 10:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 4146348 00:37:36.716 00:37:36.716 real 0m21.016s 00:37:36.716 user 0m39.513s 00:37:36.716 sys 0m4.568s 00:37:36.716 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:36.716 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:36.716 ************************************ 00:37:36.716 END TEST nvmf_digest_error 00:37:36.716 ************************************ 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:36.975 rmmod nvme_tcp 00:37:36.975 rmmod nvme_fabrics 00:37:36.975 rmmod nvme_keyring 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 4146348 ']' 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 4146348 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 4146348 ']' 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 4146348 00:37:36.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4146348) - No such process 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 4146348 is not found' 00:37:36.975 Process with pid 4146348 is not found 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- 
# remove_spdk_ns 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:36.975 10:39:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:38.885 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:38.885 00:37:38.885 real 0m51.321s 00:37:38.885 user 1m23.245s 00:37:38.885 sys 0m13.509s 00:37:38.885 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:38.885 10:39:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:38.885 ************************************ 00:37:38.885 END TEST nvmf_digest 00:37:38.885 ************************************ 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:39.145 ************************************ 00:37:39.145 START TEST nvmf_bdevperf 00:37:39.145 ************************************ 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:39.145 * Looking for test storage... 
00:37:39.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:39.145 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:39.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.145 --rc genhtml_branch_coverage=1 00:37:39.145 --rc genhtml_function_coverage=1 00:37:39.145 --rc genhtml_legend=1 00:37:39.145 --rc geninfo_all_blocks=1 00:37:39.145 --rc geninfo_unexecuted_blocks=1 00:37:39.145 00:37:39.145 ' 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:39.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.145 --rc genhtml_branch_coverage=1 00:37:39.145 --rc genhtml_function_coverage=1 00:37:39.145 --rc genhtml_legend=1 00:37:39.145 --rc geninfo_all_blocks=1 00:37:39.145 --rc geninfo_unexecuted_blocks=1 00:37:39.145 00:37:39.145 ' 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:39.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.145 --rc genhtml_branch_coverage=1 00:37:39.145 --rc genhtml_function_coverage=1 00:37:39.145 --rc genhtml_legend=1 00:37:39.145 --rc geninfo_all_blocks=1 00:37:39.145 --rc geninfo_unexecuted_blocks=1 00:37:39.145 00:37:39.145 ' 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:39.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.145 --rc genhtml_branch_coverage=1 00:37:39.145 --rc genhtml_function_coverage=1 00:37:39.145 --rc genhtml_legend=1 00:37:39.145 --rc geninfo_all_blocks=1 00:37:39.145 --rc geninfo_unexecuted_blocks=1 00:37:39.145 00:37:39.145 ' 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:39.145 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:39.146 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:39.146 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.146 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.146 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.146 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:37:39.146 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.146 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:37:39.146 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:39.146 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:39.405 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:39.405 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:39.405 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:39.406 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:39.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:39.406 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:39.406 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:39.406 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:39.406 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:39.406 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:39.406 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:39.406 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:39.406 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:39.406 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:39.406 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:39.406 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:39.406 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:39.406 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:39.406 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:39.406 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:39.406 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:39.406 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:37:39.406 10:39:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:44.681 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:44.681 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:37:44.681 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:44.681 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:44.681 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:44.681 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:44.681 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:44.681 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:37:44.681 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:44.681 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:37:44.681 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:44.682 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:44.682 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
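As a rough sketch of the NIC lookup being traced here (the Intel E810 PCI ID 0x8086:0x159b and the /sys/bus/pci/devices/$pci/net/ path come from the trace; using lspci from pciutils instead of the script's internal PCI cache is an assumption for illustration only):

  # List Intel E810 (8086:159b) PCI functions and print the kernel net device
  # name exposed under each function's sysfs node (e.g. cvl_0_0, cvl_0_1).
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    ls "/sys/bus/pci/devices/$pci/net/"
  done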
00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:44.682 Found net devices under 0000:af:00.0: cvl_0_0 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:44.682 Found net devices under 0000:af:00.1: cvl_0_1 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:44.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:44.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:37:44.682 00:37:44.682 --- 10.0.0.2 ping statistics --- 00:37:44.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:44.682 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:44.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:44.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:37:44.682 00:37:44.682 --- 10.0.0.1 ping statistics --- 00:37:44.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:44.682 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:44.682 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:44.683 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:44.683 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=4153162 00:37:44.683 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 4153162 00:37:44.683 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:44.683 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 4153162 ']' 00:37:44.683 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:44.683 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:44.683 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:44.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:44.683 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:44.683 10:39:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:44.941 [2024-12-13 10:39:38.591114] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
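Condensed, the nvmf_tcp_init sequence traced above does the following: the target-side port cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24, the initiator keeps cvl_0_1 as 10.0.0.1/24 in the default namespace, TCP port 4420 is opened at the top of the INPUT chain, connectivity is verified with one ping in each direction, and nvmf_tgt is then started inside the namespace. A stand-alone recap of the same steps, with interface names, addresses and flags taken from this run and error handling omitted:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target side lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the harness also tags this rule with -m comment
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &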
00:37:44.942 [2024-12-13 10:39:38.591199] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:44.942 [2024-12-13 10:39:38.710405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:44.942 [2024-12-13 10:39:38.817359] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:44.942 [2024-12-13 10:39:38.817401] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:44.942 [2024-12-13 10:39:38.817411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:44.942 [2024-12-13 10:39:38.817421] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:44.942 [2024-12-13 10:39:38.817428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:44.942 [2024-12-13 10:39:38.819765] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:37:44.942 [2024-12-13 10:39:38.819827] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:44.942 [2024-12-13 10:39:38.819847] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:37:45.510 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:45.510 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:45.769 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:45.769 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:45.769 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:45.769 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:45.769 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:45.769 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.769 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:45.770 [2024-12-13 10:39:39.441693] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:45.770 Malloc0 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
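The core mask passed to the target (-m 0xE) accounts for the three reactor lines above: 0xE is binary 1110, i.e. cores 1, 2 and 3, which leaves core 0 free for the bdevperf initiator started later with -c 0x1, so target and initiator never share a core. A one-liner to decode such a mask, for illustration only:

python3 -c 'print(bin(0xE), [i for i in range(4) if 0xE >> i & 1])'   # 0b1110 [1, 2, 3]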
00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:45.770 [2024-12-13 10:39:39.551925] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:45.770 { 00:37:45.770 "params": { 00:37:45.770 "name": "Nvme$subsystem", 00:37:45.770 "trtype": "$TEST_TRANSPORT", 00:37:45.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:45.770 "adrfam": "ipv4", 00:37:45.770 "trsvcid": "$NVMF_PORT", 00:37:45.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:45.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:45.770 "hdgst": ${hdgst:-false}, 00:37:45.770 "ddgst": ${ddgst:-false} 00:37:45.770 }, 00:37:45.770 "method": "bdev_nvme_attach_controller" 00:37:45.770 } 00:37:45.770 EOF 00:37:45.770 )") 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:45.770 10:39:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:45.770 "params": { 00:37:45.770 "name": "Nvme1", 00:37:45.770 "trtype": "tcp", 00:37:45.770 "traddr": "10.0.0.2", 00:37:45.770 "adrfam": "ipv4", 00:37:45.770 "trsvcid": "4420", 00:37:45.770 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:45.770 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:45.770 "hdgst": false, 00:37:45.770 "ddgst": false 00:37:45.770 }, 00:37:45.770 "method": "bdev_nvme_attach_controller" 00:37:45.770 }' 00:37:45.770 [2024-12-13 10:39:39.630208] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
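Taken together, the rpc_cmd calls above (rpc_cmd in these scripts forwards to scripts/rpc.py, talking to the target over /var/tmp/spdk.sock) build the configuration that bdevperf will connect to: a TCP transport with the same -o -u 8192 options, a 64 MB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, and a listener on 10.0.0.2:4420. Driven by hand the sequence would look roughly like this (a sketch, with arguments copied from the trace):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420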
00:37:45.770 [2024-12-13 10:39:39.630286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4153402 ] 00:37:46.029 [2024-12-13 10:39:39.742383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:46.029 [2024-12-13 10:39:39.860525] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:46.597 Running I/O for 1 seconds... 00:37:47.534 9641.00 IOPS, 37.66 MiB/s 00:37:47.534 Latency(us) 00:37:47.534 [2024-12-13T09:39:41.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:47.534 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:47.534 Verification LBA range: start 0x0 length 0x4000 00:37:47.534 Nvme1n1 : 1.01 9669.53 37.77 0.00 0.00 13184.50 1185.89 10548.18 00:37:47.534 [2024-12-13T09:39:41.425Z] =================================================================================================================== 00:37:47.534 [2024-12-13T09:39:41.425Z] Total : 9669.53 37.77 0.00 0.00 13184.50 1185.89 10548.18 00:37:48.471 10:39:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=4153850 00:37:48.471 10:39:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:48.471 10:39:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:48.471 10:39:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:48.471 10:39:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:48.471 10:39:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:48.471 10:39:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:48.471 10:39:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:48.471 { 00:37:48.471 "params": { 00:37:48.471 "name": "Nvme$subsystem", 00:37:48.471 "trtype": "$TEST_TRANSPORT", 00:37:48.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:48.471 "adrfam": "ipv4", 00:37:48.471 "trsvcid": "$NVMF_PORT", 00:37:48.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:48.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:48.471 "hdgst": ${hdgst:-false}, 00:37:48.471 "ddgst": ${ddgst:-false} 00:37:48.471 }, 00:37:48.471 "method": "bdev_nvme_attach_controller" 00:37:48.471 } 00:37:48.471 EOF 00:37:48.471 )") 00:37:48.471 10:39:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:48.471 10:39:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
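The 1-second run above is internally consistent: 9669.53 IOPS at the 4096-byte I/O size works out to 9669.53 * 4096 / 2^20 ≈ 37.77 MiB/s, matching the bandwidth column. gen_nvmf_target_json only prints the bdev_nvme_attach_controller entry in the trace; wrapped in the usual SPDK JSON-config layout (the outer "subsystems" block below is an assumption, it is not shown verbatim in the log), the config fed to bdevperf over /dev/fd/62 and /dev/fd/63 amounts to something like the following, written here to a scratch file so the 15-second run could be repeated by hand:

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload parameters as the 15-second run in this log: queue depth 128, 4 KiB verify I/O.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 15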
00:37:48.471 10:39:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:48.471 10:39:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:48.471 "params": { 00:37:48.471 "name": "Nvme1", 00:37:48.471 "trtype": "tcp", 00:37:48.471 "traddr": "10.0.0.2", 00:37:48.471 "adrfam": "ipv4", 00:37:48.471 "trsvcid": "4420", 00:37:48.471 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:48.471 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:48.471 "hdgst": false, 00:37:48.471 "ddgst": false 00:37:48.471 }, 00:37:48.471 "method": "bdev_nvme_attach_controller" 00:37:48.471 }' 00:37:48.471 [2024-12-13 10:39:42.319873] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:48.471 [2024-12-13 10:39:42.319952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4153850 ] 00:37:48.730 [2024-12-13 10:39:42.431873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.730 [2024-12-13 10:39:42.546508] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:49.298 Running I/O for 15 seconds... 00:37:51.615 9507.00 IOPS, 37.14 MiB/s [2024-12-13T09:39:45.506Z] 9537.00 IOPS, 37.25 MiB/s [2024-12-13T09:39:45.506Z] 10:39:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 4153162 00:37:51.615 10:39:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:37:51.615 [2024-12-13 10:39:45.271479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:34624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.615 [2024-12-13 10:39:45.271531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.271558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.615 [2024-12-13 10:39:45.271571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.271585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:34640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.615 [2024-12-13 10:39:45.271596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.271609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.615 [2024-12-13 10:39:45.271618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.271635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:34656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.615 [2024-12-13 10:39:45.271646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.271657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.615 [2024-12-13 
10:39:45.271667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.271678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.615 [2024-12-13 10:39:45.271689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.271700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.615 [2024-12-13 10:39:45.271710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.271722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:33728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.615 [2024-12-13 10:39:45.271732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.271745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:33736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.615 [2024-12-13 10:39:45.271758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.271771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:33744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.615 [2024-12-13 10:39:45.271782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.271805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.615 [2024-12-13 10:39:45.271817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.271829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:33760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.615 [2024-12-13 10:39:45.271839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.271850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:33768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.615 [2024-12-13 10:39:45.271859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.271870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:33776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.615 [2024-12-13 10:39:45.271879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.271890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:34688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.615 [2024-12-13 10:39:45.271899] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.271910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:34696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.615 [2024-12-13 10:39:45.271921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.271933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:34704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.615 [2024-12-13 10:39:45.271944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.271956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:34712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.615 [2024-12-13 10:39:45.271965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.271976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:34720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.615 [2024-12-13 10:39:45.271985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.271996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:34728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.615 [2024-12-13 10:39:45.272008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.272021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.615 [2024-12-13 10:39:45.272031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.272042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.615 [2024-12-13 10:39:45.272051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.272062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:33792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.615 [2024-12-13 10:39:45.272071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.272082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.615 [2024-12-13 10:39:45.272091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.272102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:33808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.615 [2024-12-13 10:39:45.272111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.272121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:33816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.615 [2024-12-13 10:39:45.272130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.272141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.615 [2024-12-13 10:39:45.272150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.272161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:33832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.615 [2024-12-13 10:39:45.272170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.272183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.615 [2024-12-13 10:39:45.272192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.272203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.615 [2024-12-13 10:39:45.272212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.272223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:34744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:51.615 [2024-12-13 10:39:45.272232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.272243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:33856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.615 [2024-12-13 10:39:45.272252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.615 [2024-12-13 10:39:45.272263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.615 [2024-12-13 10:39:45.272272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:33896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:33920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:33928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:33936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:33944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:33960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:37:51.616 [2024-12-13 10:39:45.272538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:33968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:33984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:33992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:34008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:34032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272742] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:34048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:34056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:34064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:34080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:34088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:34112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:34120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.272987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:34144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.272997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.273008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:34152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.273017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.616 [2024-12-13 10:39:45.273028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:34160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.616 [2024-12-13 10:39:45.273037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:34184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:34192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:34200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:18 nsid:1 lba:34208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:34232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:34264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:34280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34288 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:34296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:34304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:34328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:34352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:34368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:51.617 [2024-12-13 10:39:45.273576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:34384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:34392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:34400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:34424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:34432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:34440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:34448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273775] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.617 [2024-12-13 10:39:45.273786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:34456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.617 [2024-12-13 10:39:45.273795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.618 [2024-12-13 10:39:45.273805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.618 [2024-12-13 10:39:45.273814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.618 [2024-12-13 10:39:45.273825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.618 [2024-12-13 10:39:45.273834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.618 [2024-12-13 10:39:45.273844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.618 [2024-12-13 10:39:45.273853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.618 [2024-12-13 10:39:45.273864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:34488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.618 [2024-12-13 10:39:45.273873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.618 [2024-12-13 10:39:45.273885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.618 [2024-12-13 10:39:45.273894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.618 [2024-12-13 10:39:45.273907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:34504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.618 [2024-12-13 10:39:45.273916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.618 [2024-12-13 10:39:45.273927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.618 [2024-12-13 10:39:45.273936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.618 [2024-12-13 10:39:45.273947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:34520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.618 [2024-12-13 10:39:45.273956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.618 [2024-12-13 10:39:45.273966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:34528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.618 [2024-12-13 10:39:45.273975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.618 [2024-12-13 10:39:45.273987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:34536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.618 [2024-12-13 10:39:45.273996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.618 [2024-12-13 10:39:45.274007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:34544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.618 [2024-12-13 10:39:45.274016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.618 [2024-12-13 10:39:45.274026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.618 [2024-12-13 10:39:45.274035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.618 [2024-12-13 10:39:45.274046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:34560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.618 [2024-12-13 10:39:45.274055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.618 [2024-12-13 10:39:45.274065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:34568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.618 [2024-12-13 10:39:45.274074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.618 [2024-12-13 10:39:45.274085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.618 [2024-12-13 10:39:45.274094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.618 [2024-12-13 10:39:45.274105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.618 [2024-12-13 10:39:45.274114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.618 [2024-12-13 10:39:45.274124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.618 [2024-12-13 10:39:45.274133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.618 [2024-12-13 10:39:45.274143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:34600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.618 [2024-12-13 10:39:45.274153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.618 [2024-12-13 10:39:45.274163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.618 [2024-12-13 10:39:45.274172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.618 [2024-12-13 10:39:45.274181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326480 is same with the state(6) to be set 00:37:51.618 [2024-12-13 10:39:45.274195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:51.618 [2024-12-13 10:39:45.274211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:51.618 [2024-12-13 10:39:45.274221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34616 len:8 PRP1 0x0 PRP2 0x0 00:37:51.618 [2024-12-13 10:39:45.274232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.618 [2024-12-13 10:39:45.277671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.618 [2024-12-13 10:39:45.277753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.618 [2024-12-13 10:39:45.278356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.618 [2024-12-13 10:39:45.278382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.618 [2024-12-13 10:39:45.278394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.618 [2024-12-13 10:39:45.278604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.618 [2024-12-13 10:39:45.278805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.618 [2024-12-13 10:39:45.278822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.618 [2024-12-13 10:39:45.278834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.618 [2024-12-13 10:39:45.278845] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:51.618 [2024-12-13 10:39:45.291106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.618 [2024-12-13 10:39:45.291593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.618 [2024-12-13 10:39:45.291657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.618 [2024-12-13 10:39:45.291690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.618 [2024-12-13 10:39:45.292157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.618 [2024-12-13 10:39:45.292336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.618 [2024-12-13 10:39:45.292347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.618 [2024-12-13 10:39:45.292355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.618 [2024-12-13 10:39:45.292364] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:51.618 [2024-12-13 10:39:45.304201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.618 [2024-12-13 10:39:45.304709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.618 [2024-12-13 10:39:45.304771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.618 [2024-12-13 10:39:45.304804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.618 [2024-12-13 10:39:45.305225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.618 [2024-12-13 10:39:45.305404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.618 [2024-12-13 10:39:45.305414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.618 [2024-12-13 10:39:45.305423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.618 [2024-12-13 10:39:45.305431] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:51.618 [2024-12-13 10:39:45.317249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.618 [2024-12-13 10:39:45.317749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.618 [2024-12-13 10:39:45.317808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.618 [2024-12-13 10:39:45.317849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.618 [2024-12-13 10:39:45.318300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.618 [2024-12-13 10:39:45.318498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.618 [2024-12-13 10:39:45.318509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.618 [2024-12-13 10:39:45.318518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.618 [2024-12-13 10:39:45.318526] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:51.618 [2024-12-13 10:39:45.330308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.618 [2024-12-13 10:39:45.330783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.618 [2024-12-13 10:39:45.330805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.618 [2024-12-13 10:39:45.330815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.618 [2024-12-13 10:39:45.331006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.619 [2024-12-13 10:39:45.331196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.619 [2024-12-13 10:39:45.331206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.619 [2024-12-13 10:39:45.331215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.619 [2024-12-13 10:39:45.331224] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:51.619 [2024-12-13 10:39:45.343424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.619 [2024-12-13 10:39:45.343942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.619 [2024-12-13 10:39:45.344000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.619 [2024-12-13 10:39:45.344033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.619 [2024-12-13 10:39:45.344554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.619 [2024-12-13 10:39:45.344744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.619 [2024-12-13 10:39:45.344755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.619 [2024-12-13 10:39:45.344763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.619 [2024-12-13 10:39:45.344772] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:51.619 [2024-12-13 10:39:45.356564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.619 [2024-12-13 10:39:45.357007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.619 [2024-12-13 10:39:45.357027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.619 [2024-12-13 10:39:45.357037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.619 [2024-12-13 10:39:45.357219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.619 [2024-12-13 10:39:45.357399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.619 [2024-12-13 10:39:45.357409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.619 [2024-12-13 10:39:45.357418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.619 [2024-12-13 10:39:45.357426] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:51.619 [2024-12-13 10:39:45.369736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.619 [2024-12-13 10:39:45.370199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.619 [2024-12-13 10:39:45.370220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.619 [2024-12-13 10:39:45.370230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.619 [2024-12-13 10:39:45.370410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.619 [2024-12-13 10:39:45.370618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.619 [2024-12-13 10:39:45.370630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.619 [2024-12-13 10:39:45.370638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.619 [2024-12-13 10:39:45.370647] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:51.619 [2024-12-13 10:39:45.382763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.619 [2024-12-13 10:39:45.383212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.619 [2024-12-13 10:39:45.383270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.619 [2024-12-13 10:39:45.383302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.619 [2024-12-13 10:39:45.383811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.619 [2024-12-13 10:39:45.384000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.619 [2024-12-13 10:39:45.384011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.619 [2024-12-13 10:39:45.384019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.619 [2024-12-13 10:39:45.384028] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:51.619 [2024-12-13 10:39:45.395910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.619 [2024-12-13 10:39:45.396413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.619 [2024-12-13 10:39:45.396434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.619 [2024-12-13 10:39:45.396445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.619 [2024-12-13 10:39:45.396641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.619 [2024-12-13 10:39:45.396829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.619 [2024-12-13 10:39:45.396843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.619 [2024-12-13 10:39:45.396852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.619 [2024-12-13 10:39:45.396860] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:51.619 [2024-12-13 10:39:45.408978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.619 [2024-12-13 10:39:45.409427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.619 [2024-12-13 10:39:45.409453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.619 [2024-12-13 10:39:45.409463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.619 [2024-12-13 10:39:45.409667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.619 [2024-12-13 10:39:45.409855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.619 [2024-12-13 10:39:45.409866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.619 [2024-12-13 10:39:45.409874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.619 [2024-12-13 10:39:45.409883] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:51.619 [2024-12-13 10:39:45.422232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.619 [2024-12-13 10:39:45.422687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.619 [2024-12-13 10:39:45.422707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.619 [2024-12-13 10:39:45.422717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.619 [2024-12-13 10:39:45.422914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.619 [2024-12-13 10:39:45.423102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.619 [2024-12-13 10:39:45.423113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.619 [2024-12-13 10:39:45.423121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.619 [2024-12-13 10:39:45.423130] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:51.619 [2024-12-13 10:39:45.435390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.619 [2024-12-13 10:39:45.435880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.619 [2024-12-13 10:39:45.435940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.619 [2024-12-13 10:39:45.435974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.619 [2024-12-13 10:39:45.436544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.619 [2024-12-13 10:39:45.436734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.619 [2024-12-13 10:39:45.436745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.619 [2024-12-13 10:39:45.436754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.619 [2024-12-13 10:39:45.436765] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:51.619 [2024-12-13 10:39:45.448470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.619 [2024-12-13 10:39:45.448916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.620 [2024-12-13 10:39:45.448937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.620 [2024-12-13 10:39:45.448946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.620 [2024-12-13 10:39:45.449125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.620 [2024-12-13 10:39:45.449304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.620 [2024-12-13 10:39:45.449315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.620 [2024-12-13 10:39:45.449323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.620 [2024-12-13 10:39:45.449331] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:51.620 [2024-12-13 10:39:45.461635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.620 [2024-12-13 10:39:45.462142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.620 [2024-12-13 10:39:45.462200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.620 [2024-12-13 10:39:45.462233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.620 [2024-12-13 10:39:45.462776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.620 [2024-12-13 10:39:45.462965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.620 [2024-12-13 10:39:45.462976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.620 [2024-12-13 10:39:45.462984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.620 [2024-12-13 10:39:45.462993] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:51.620 [2024-12-13 10:39:45.474790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.620 [2024-12-13 10:39:45.475268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.620 [2024-12-13 10:39:45.475326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.620 [2024-12-13 10:39:45.475359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.620 [2024-12-13 10:39:45.475939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.620 [2024-12-13 10:39:45.476129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.620 [2024-12-13 10:39:45.476139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.620 [2024-12-13 10:39:45.476148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.620 [2024-12-13 10:39:45.476156] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:51.620 [2024-12-13 10:39:45.487980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.620 [2024-12-13 10:39:45.488415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.620 [2024-12-13 10:39:45.488442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.620 [2024-12-13 10:39:45.488458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.620 [2024-12-13 10:39:45.488663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.620 [2024-12-13 10:39:45.488851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.620 [2024-12-13 10:39:45.488862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.620 [2024-12-13 10:39:45.488871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.620 [2024-12-13 10:39:45.488879] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:51.620 [2024-12-13 10:39:45.501213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.620 [2024-12-13 10:39:45.501616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.620 [2024-12-13 10:39:45.501638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.620 [2024-12-13 10:39:45.501648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.620 [2024-12-13 10:39:45.501842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.620 [2024-12-13 10:39:45.502037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.620 [2024-12-13 10:39:45.502048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.620 [2024-12-13 10:39:45.502057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.620 [2024-12-13 10:39:45.502065] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:51.879 [2024-12-13 10:39:45.514370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.880 [2024-12-13 10:39:45.514827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.880 [2024-12-13 10:39:45.514848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.880 [2024-12-13 10:39:45.514858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.880 [2024-12-13 10:39:45.515052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.880 [2024-12-13 10:39:45.515246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.880 [2024-12-13 10:39:45.515257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.880 [2024-12-13 10:39:45.515265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.880 [2024-12-13 10:39:45.515274] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:51.880 [2024-12-13 10:39:45.527418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.880 [2024-12-13 10:39:45.527904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.880 [2024-12-13 10:39:45.527925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.880 [2024-12-13 10:39:45.527938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.880 [2024-12-13 10:39:45.528126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.880 [2024-12-13 10:39:45.528314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.880 [2024-12-13 10:39:45.528325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.880 [2024-12-13 10:39:45.528334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.880 [2024-12-13 10:39:45.528344] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:51.880 [2024-12-13 10:39:45.540854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.880 [2024-12-13 10:39:45.541341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.880 [2024-12-13 10:39:45.541406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.880 [2024-12-13 10:39:45.541440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.880 [2024-12-13 10:39:45.541987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.880 [2024-12-13 10:39:45.542180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.880 [2024-12-13 10:39:45.542192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.880 [2024-12-13 10:39:45.542200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.880 [2024-12-13 10:39:45.542209] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:51.880 [2024-12-13 10:39:45.554244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.880 [2024-12-13 10:39:45.554714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.880 [2024-12-13 10:39:45.554736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.880 [2024-12-13 10:39:45.554746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.880 [2024-12-13 10:39:45.554940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.880 [2024-12-13 10:39:45.555134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.880 [2024-12-13 10:39:45.555145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.880 [2024-12-13 10:39:45.555154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.880 [2024-12-13 10:39:45.555162] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:51.880 [2024-12-13 10:39:45.567496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.880 [2024-12-13 10:39:45.567922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.880 [2024-12-13 10:39:45.567943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.880 [2024-12-13 10:39:45.567953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.880 [2024-12-13 10:39:45.568141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.880 [2024-12-13 10:39:45.568333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.880 [2024-12-13 10:39:45.568344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.880 [2024-12-13 10:39:45.568352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.880 [2024-12-13 10:39:45.568361] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:51.880 [2024-12-13 10:39:45.580643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.880 [2024-12-13 10:39:45.581040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.880 [2024-12-13 10:39:45.581098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.880 [2024-12-13 10:39:45.581131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.880 [2024-12-13 10:39:45.581797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.880 [2024-12-13 10:39:45.582216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.880 [2024-12-13 10:39:45.582227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.880 [2024-12-13 10:39:45.582235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.880 [2024-12-13 10:39:45.582244] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:51.880 [2024-12-13 10:39:45.593718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.880 [2024-12-13 10:39:45.594089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.880 [2024-12-13 10:39:45.594147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.880 [2024-12-13 10:39:45.594179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.880 [2024-12-13 10:39:45.594845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.880 [2024-12-13 10:39:45.595254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.880 [2024-12-13 10:39:45.595264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.880 [2024-12-13 10:39:45.595273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.880 [2024-12-13 10:39:45.595281] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:51.880 [2024-12-13 10:39:45.606847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.880 [2024-12-13 10:39:45.607335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.880 [2024-12-13 10:39:45.607356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.880 [2024-12-13 10:39:45.607366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.880 [2024-12-13 10:39:45.607561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.880 [2024-12-13 10:39:45.607751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.880 [2024-12-13 10:39:45.607762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.880 [2024-12-13 10:39:45.607773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.880 [2024-12-13 10:39:45.607782] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:51.880 [2024-12-13 10:39:45.619901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.880 [2024-12-13 10:39:45.620361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.880 [2024-12-13 10:39:45.620419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.880 [2024-12-13 10:39:45.620468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.880 [2024-12-13 10:39:45.620949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.880 [2024-12-13 10:39:45.621136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.880 [2024-12-13 10:39:45.621147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.880 [2024-12-13 10:39:45.621155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.880 [2024-12-13 10:39:45.621164] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:51.880 [2024-12-13 10:39:45.632958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.880 [2024-12-13 10:39:45.633349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.881 [2024-12-13 10:39:45.633369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.881 [2024-12-13 10:39:45.633379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.881 [2024-12-13 10:39:45.633574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.881 [2024-12-13 10:39:45.633762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.881 [2024-12-13 10:39:45.633773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.881 [2024-12-13 10:39:45.633782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.881 [2024-12-13 10:39:45.633790] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:51.881 [2024-12-13 10:39:45.646291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.881 [2024-12-13 10:39:45.646789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.881 [2024-12-13 10:39:45.646850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.881 [2024-12-13 10:39:45.646886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.881 [2024-12-13 10:39:45.647424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.881 [2024-12-13 10:39:45.647620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.881 [2024-12-13 10:39:45.647632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.881 [2024-12-13 10:39:45.647640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.881 [2024-12-13 10:39:45.647649] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:51.881 [2024-12-13 10:39:45.659446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.881 [2024-12-13 10:39:45.659895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.881 [2024-12-13 10:39:45.659954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.881 [2024-12-13 10:39:45.659986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.881 [2024-12-13 10:39:45.660651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.881 [2024-12-13 10:39:45.661262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.881 [2024-12-13 10:39:45.661272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.881 [2024-12-13 10:39:45.661280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.881 [2024-12-13 10:39:45.661288] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:51.881 [2024-12-13 10:39:45.672595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.881 [2024-12-13 10:39:45.673056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.881 [2024-12-13 10:39:45.673077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.881 [2024-12-13 10:39:45.673087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.881 [2024-12-13 10:39:45.673272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.881 [2024-12-13 10:39:45.673456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.881 [2024-12-13 10:39:45.673467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.881 [2024-12-13 10:39:45.673475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.881 [2024-12-13 10:39:45.673483] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:51.881 [2024-12-13 10:39:45.685680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.881 [2024-12-13 10:39:45.686106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.881 [2024-12-13 10:39:45.686126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.881 [2024-12-13 10:39:45.686136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.881 [2024-12-13 10:39:45.686314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.881 [2024-12-13 10:39:45.686516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.881 [2024-12-13 10:39:45.686528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.881 [2024-12-13 10:39:45.686536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.881 [2024-12-13 10:39:45.686545] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:51.881 [2024-12-13 10:39:45.698750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.881 [2024-12-13 10:39:45.699209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.881 [2024-12-13 10:39:45.699232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.881 [2024-12-13 10:39:45.699242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.881 [2024-12-13 10:39:45.699422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.881 [2024-12-13 10:39:45.699630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.881 [2024-12-13 10:39:45.699642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.881 [2024-12-13 10:39:45.699650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.881 [2024-12-13 10:39:45.699659] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:51.881 [2024-12-13 10:39:45.711773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.881 [2024-12-13 10:39:45.712144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.881 [2024-12-13 10:39:45.712203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.881 [2024-12-13 10:39:45.712235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.881 [2024-12-13 10:39:45.712901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.881 [2024-12-13 10:39:45.713476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.881 [2024-12-13 10:39:45.713486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.881 [2024-12-13 10:39:45.713494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.881 [2024-12-13 10:39:45.713502] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:51.881 [2024-12-13 10:39:45.724863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.881 [2024-12-13 10:39:45.725287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.881 [2024-12-13 10:39:45.725306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.881 [2024-12-13 10:39:45.725316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.881 [2024-12-13 10:39:45.725517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.881 [2024-12-13 10:39:45.725706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.881 [2024-12-13 10:39:45.725717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.881 [2024-12-13 10:39:45.725725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.881 [2024-12-13 10:39:45.725734] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:51.881 [2024-12-13 10:39:45.738112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.881 [2024-12-13 10:39:45.738591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.881 [2024-12-13 10:39:45.738635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.881 [2024-12-13 10:39:45.738668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.881 [2024-12-13 10:39:45.739330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.881 [2024-12-13 10:39:45.739567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.881 [2024-12-13 10:39:45.739579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.881 [2024-12-13 10:39:45.739587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.881 [2024-12-13 10:39:45.739596] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:51.881 [2024-12-13 10:39:45.751141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.881 [2024-12-13 10:39:45.751581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.881 [2024-12-13 10:39:45.751640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.881 [2024-12-13 10:39:45.751672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.881 [2024-12-13 10:39:45.752321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.881 [2024-12-13 10:39:45.752662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.881 [2024-12-13 10:39:45.752673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.881 [2024-12-13 10:39:45.752682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.881 [2024-12-13 10:39:45.752690] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:51.881 [2024-12-13 10:39:45.764311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:51.881 [2024-12-13 10:39:45.764710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:51.881 [2024-12-13 10:39:45.764732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:51.882 [2024-12-13 10:39:45.764741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:51.882 [2024-12-13 10:39:45.764929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:51.882 [2024-12-13 10:39:45.765117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:51.882 [2024-12-13 10:39:45.765128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:51.882 [2024-12-13 10:39:45.765138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:51.882 [2024-12-13 10:39:45.765146] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.142 [2024-12-13 10:39:45.777742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.142 [2024-12-13 10:39:45.778210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.142 [2024-12-13 10:39:45.778231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.142 [2024-12-13 10:39:45.778241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.142 [2024-12-13 10:39:45.778435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.142 [2024-12-13 10:39:45.778638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.142 [2024-12-13 10:39:45.778650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.142 [2024-12-13 10:39:45.778659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.142 [2024-12-13 10:39:45.778668] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.142 [2024-12-13 10:39:45.791157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.142 [2024-12-13 10:39:45.791620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.142 [2024-12-13 10:39:45.791689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.142 [2024-12-13 10:39:45.791722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.142 [2024-12-13 10:39:45.792371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.142 [2024-12-13 10:39:45.792737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.142 [2024-12-13 10:39:45.792755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.142 [2024-12-13 10:39:45.792769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.142 [2024-12-13 10:39:45.792782] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.142 [2024-12-13 10:39:45.805326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.142 [2024-12-13 10:39:45.805801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.142 [2024-12-13 10:39:45.805823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.142 [2024-12-13 10:39:45.805834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.142 [2024-12-13 10:39:45.806039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.142 [2024-12-13 10:39:45.806246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.142 [2024-12-13 10:39:45.806258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.142 [2024-12-13 10:39:45.806267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.142 [2024-12-13 10:39:45.806276] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.142 [2024-12-13 10:39:45.818491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.142 [2024-12-13 10:39:45.818930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.142 [2024-12-13 10:39:45.818950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.142 [2024-12-13 10:39:45.818960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.142 [2024-12-13 10:39:45.819138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.142 [2024-12-13 10:39:45.819316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.142 [2024-12-13 10:39:45.819326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.142 [2024-12-13 10:39:45.819337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.142 [2024-12-13 10:39:45.819346] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.142 [2024-12-13 10:39:45.831647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.142 [2024-12-13 10:39:45.831997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.142 [2024-12-13 10:39:45.832017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.142 [2024-12-13 10:39:45.832027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.142 [2024-12-13 10:39:45.832203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.142 [2024-12-13 10:39:45.832382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.142 [2024-12-13 10:39:45.832392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.142 [2024-12-13 10:39:45.832400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.142 [2024-12-13 10:39:45.832409] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.142 [2024-12-13 10:39:45.844909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.142 [2024-12-13 10:39:45.845249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.142 [2024-12-13 10:39:45.845270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.142 [2024-12-13 10:39:45.845280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.142 [2024-12-13 10:39:45.845474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.142 [2024-12-13 10:39:45.845663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.142 [2024-12-13 10:39:45.845673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.142 [2024-12-13 10:39:45.845681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.142 [2024-12-13 10:39:45.845690] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.142 [2024-12-13 10:39:45.857990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.142 [2024-12-13 10:39:45.858459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.142 [2024-12-13 10:39:45.858525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.142 [2024-12-13 10:39:45.858557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.143 [2024-12-13 10:39:45.859204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.143 [2024-12-13 10:39:45.859780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.143 [2024-12-13 10:39:45.859813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.143 [2024-12-13 10:39:45.859821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.143 [2024-12-13 10:39:45.859830] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.143 [2024-12-13 10:39:45.871375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.143 [2024-12-13 10:39:45.871782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.143 [2024-12-13 10:39:45.871804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.143 [2024-12-13 10:39:45.871814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.143 [2024-12-13 10:39:45.872008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.143 [2024-12-13 10:39:45.872202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.143 [2024-12-13 10:39:45.872214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.143 [2024-12-13 10:39:45.872223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.143 [2024-12-13 10:39:45.872231] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.143 [2024-12-13 10:39:45.884821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.143 [2024-12-13 10:39:45.885289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.143 [2024-12-13 10:39:45.885309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.143 [2024-12-13 10:39:45.885319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.143 [2024-12-13 10:39:45.885522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.143 [2024-12-13 10:39:45.885716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.143 [2024-12-13 10:39:45.885727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.143 [2024-12-13 10:39:45.885736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.143 [2024-12-13 10:39:45.885745] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.143 [2024-12-13 10:39:45.898164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.143 [2024-12-13 10:39:45.898610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.143 [2024-12-13 10:39:45.898632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.143 [2024-12-13 10:39:45.898642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.143 [2024-12-13 10:39:45.898837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.143 [2024-12-13 10:39:45.899030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.143 [2024-12-13 10:39:45.899041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.143 [2024-12-13 10:39:45.899050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.143 [2024-12-13 10:39:45.899059] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.143 [2024-12-13 10:39:45.911484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.143 [2024-12-13 10:39:45.911883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.143 [2024-12-13 10:39:45.911943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.143 [2024-12-13 10:39:45.911984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.143 [2024-12-13 10:39:45.912601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.143 [2024-12-13 10:39:45.912797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.143 [2024-12-13 10:39:45.912808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.143 [2024-12-13 10:39:45.912817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.143 [2024-12-13 10:39:45.912826] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.143 [2024-12-13 10:39:45.924671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.143 [2024-12-13 10:39:45.925093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.143 [2024-12-13 10:39:45.925115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.143 [2024-12-13 10:39:45.925126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.143 [2024-12-13 10:39:45.925314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.143 [2024-12-13 10:39:45.925508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.143 [2024-12-13 10:39:45.925520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.143 [2024-12-13 10:39:45.925528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.143 [2024-12-13 10:39:45.925537] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.143 [2024-12-13 10:39:45.937929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.143 [2024-12-13 10:39:45.938266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.143 [2024-12-13 10:39:45.938287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.143 [2024-12-13 10:39:45.938297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.143 [2024-12-13 10:39:45.938497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.143 [2024-12-13 10:39:45.938691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.143 [2024-12-13 10:39:45.938703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.143 [2024-12-13 10:39:45.938711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.143 [2024-12-13 10:39:45.938720] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.143 [2024-12-13 10:39:45.951163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.143 [2024-12-13 10:39:45.951530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.143 [2024-12-13 10:39:45.951551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.143 [2024-12-13 10:39:45.951562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.143 [2024-12-13 10:39:45.951754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.143 [2024-12-13 10:39:45.951942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.143 [2024-12-13 10:39:45.951953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.143 [2024-12-13 10:39:45.951962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.143 [2024-12-13 10:39:45.951970] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.143 [2024-12-13 10:39:45.964275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.143 [2024-12-13 10:39:45.964659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.143 [2024-12-13 10:39:45.964680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.143 [2024-12-13 10:39:45.964690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.143 [2024-12-13 10:39:45.964879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.143 [2024-12-13 10:39:45.965068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.143 [2024-12-13 10:39:45.965079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.143 [2024-12-13 10:39:45.965088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.143 [2024-12-13 10:39:45.965096] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.143 [2024-12-13 10:39:45.977391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.143 [2024-12-13 10:39:45.977759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.143 [2024-12-13 10:39:45.977816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.143 [2024-12-13 10:39:45.977848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.143 [2024-12-13 10:39:45.978509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.143 [2024-12-13 10:39:45.979091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.143 [2024-12-13 10:39:45.979102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.143 [2024-12-13 10:39:45.979110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.143 [2024-12-13 10:39:45.979119] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.143 [2024-12-13 10:39:45.990486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.143 [2024-12-13 10:39:45.990933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.143 [2024-12-13 10:39:45.990954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.143 [2024-12-13 10:39:45.990964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.143 [2024-12-13 10:39:45.991152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.144 [2024-12-13 10:39:45.991341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.144 [2024-12-13 10:39:45.991356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.144 [2024-12-13 10:39:45.991366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.144 [2024-12-13 10:39:45.991375] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.144 [2024-12-13 10:39:46.003608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.144 [2024-12-13 10:39:46.004019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.144 [2024-12-13 10:39:46.004040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.144 [2024-12-13 10:39:46.004050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.144 [2024-12-13 10:39:46.004238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.144 [2024-12-13 10:39:46.004427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.144 [2024-12-13 10:39:46.004438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.144 [2024-12-13 10:39:46.004446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.144 [2024-12-13 10:39:46.004461] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.144 [2024-12-13 10:39:46.016693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.144 [2024-12-13 10:39:46.017090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.144 [2024-12-13 10:39:46.017110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.144 [2024-12-13 10:39:46.017120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.144 [2024-12-13 10:39:46.017308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.144 [2024-12-13 10:39:46.017502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.144 [2024-12-13 10:39:46.017514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.144 [2024-12-13 10:39:46.017522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.144 [2024-12-13 10:39:46.017531] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.144 [2024-12-13 10:39:46.030103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.144 [2024-12-13 10:39:46.030567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.144 [2024-12-13 10:39:46.030589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.144 [2024-12-13 10:39:46.030599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.144 [2024-12-13 10:39:46.030792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.144 [2024-12-13 10:39:46.030986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.144 [2024-12-13 10:39:46.030998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.144 [2024-12-13 10:39:46.031006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.144 [2024-12-13 10:39:46.031019] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.404 [2024-12-13 10:39:46.043587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.404 [2024-12-13 10:39:46.043916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.404 [2024-12-13 10:39:46.043939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.404 [2024-12-13 10:39:46.043950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.404 [2024-12-13 10:39:46.044144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.404 [2024-12-13 10:39:46.044338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.404 [2024-12-13 10:39:46.044349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.404 [2024-12-13 10:39:46.044358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.404 [2024-12-13 10:39:46.044374] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.404 [2024-12-13 10:39:46.056889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.404 [2024-12-13 10:39:46.057202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.404 [2024-12-13 10:39:46.057224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.404 [2024-12-13 10:39:46.057234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.404 [2024-12-13 10:39:46.057422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.404 [2024-12-13 10:39:46.057617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.404 [2024-12-13 10:39:46.057629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.404 [2024-12-13 10:39:46.057638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.404 [2024-12-13 10:39:46.057647] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.404 [2024-12-13 10:39:46.070037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.404 [2024-12-13 10:39:46.070368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.404 [2024-12-13 10:39:46.070389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.404 [2024-12-13 10:39:46.070399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.404 [2024-12-13 10:39:46.070592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.404 [2024-12-13 10:39:46.070781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.404 [2024-12-13 10:39:46.070791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.404 [2024-12-13 10:39:46.070800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.404 [2024-12-13 10:39:46.070808] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.404 [2024-12-13 10:39:46.083281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.404 [2024-12-13 10:39:46.083600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.404 [2024-12-13 10:39:46.083621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.404 [2024-12-13 10:39:46.083631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.405 [2024-12-13 10:39:46.083819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.405 [2024-12-13 10:39:46.084007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.405 [2024-12-13 10:39:46.084018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.405 [2024-12-13 10:39:46.084026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.405 [2024-12-13 10:39:46.084035] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.405 [2024-12-13 10:39:46.096568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.405 [2024-12-13 10:39:46.096973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.405 [2024-12-13 10:39:46.096995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.405 [2024-12-13 10:39:46.097005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.405 [2024-12-13 10:39:46.097192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.405 [2024-12-13 10:39:46.097380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.405 [2024-12-13 10:39:46.097391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.405 [2024-12-13 10:39:46.097399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.405 [2024-12-13 10:39:46.097408] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.405 6866.67 IOPS, 26.82 MiB/s [2024-12-13T09:39:46.296Z] [2024-12-13 10:39:46.109655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.405 [2024-12-13 10:39:46.110038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.405 [2024-12-13 10:39:46.110058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.405 [2024-12-13 10:39:46.110068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.405 [2024-12-13 10:39:46.110248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.405 [2024-12-13 10:39:46.110427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.405 [2024-12-13 10:39:46.110437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.405 [2024-12-13 10:39:46.110445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.405 [2024-12-13 10:39:46.110459] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.405 [2024-12-13 10:39:46.122876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.405 [2024-12-13 10:39:46.123312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.405 [2024-12-13 10:39:46.123370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.405 [2024-12-13 10:39:46.123413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.405 [2024-12-13 10:39:46.124078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.405 [2024-12-13 10:39:46.124504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.405 [2024-12-13 10:39:46.124516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.405 [2024-12-13 10:39:46.124524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.405 [2024-12-13 10:39:46.124533] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
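[editor's note] The repeated "posix_sock_create: connect() failed, errno = 111" entries above correspond to ECONNREFUSED on Linux: the initiator keeps retrying a TCP connect to 10.0.0.2:4420 while nothing is accepting on that address, so every reconnect attempt fails before the NVMe/TCP qpair can be re-established. A minimal standalone C sketch (not SPDK code; address and port are simply copied from the log) showing how a refused connect surfaces as errno 111:

/* Minimal sketch (not SPDK code): reproduce the errno = 111 (ECONNREFUSED)
 * that posix_sock_create reports when nothing is listening on the target.
 * Address/port taken from the log above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the target this is ECONNREFUSED (111 on Linux),
         * matching "connect() failed, errno = 111" in the log. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}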
00:37:52.405 [2024-12-13 10:39:46.136009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.405 [2024-12-13 10:39:46.136396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.405 [2024-12-13 10:39:46.136417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.405 [2024-12-13 10:39:46.136427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.405 [2024-12-13 10:39:46.136623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.405 [2024-12-13 10:39:46.136811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.405 [2024-12-13 10:39:46.136822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.405 [2024-12-13 10:39:46.136830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.405 [2024-12-13 10:39:46.136839] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.405 [2024-12-13 10:39:46.149159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.405 [2024-12-13 10:39:46.149489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.405 [2024-12-13 10:39:46.149510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.405 [2024-12-13 10:39:46.149521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.405 [2024-12-13 10:39:46.149713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.405 [2024-12-13 10:39:46.149890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.405 [2024-12-13 10:39:46.149901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.405 [2024-12-13 10:39:46.149909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.405 [2024-12-13 10:39:46.149917] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.405 [2024-12-13 10:39:46.162212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.405 [2024-12-13 10:39:46.162597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.405 [2024-12-13 10:39:46.162619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.405 [2024-12-13 10:39:46.162629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.405 [2024-12-13 10:39:46.162817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.405 [2024-12-13 10:39:46.163009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.405 [2024-12-13 10:39:46.163020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.405 [2024-12-13 10:39:46.163029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.405 [2024-12-13 10:39:46.163037] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.405 [2024-12-13 10:39:46.175329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.405 [2024-12-13 10:39:46.175738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.405 [2024-12-13 10:39:46.175759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.405 [2024-12-13 10:39:46.175769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.405 [2024-12-13 10:39:46.175957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.405 [2024-12-13 10:39:46.176146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.405 [2024-12-13 10:39:46.176157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.405 [2024-12-13 10:39:46.176165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.405 [2024-12-13 10:39:46.176174] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.405 [2024-12-13 10:39:46.188466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.405 [2024-12-13 10:39:46.188815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.405 [2024-12-13 10:39:46.188835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.405 [2024-12-13 10:39:46.188845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.405 [2024-12-13 10:39:46.189034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.405 [2024-12-13 10:39:46.189223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.405 [2024-12-13 10:39:46.189234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.405 [2024-12-13 10:39:46.189243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.405 [2024-12-13 10:39:46.189252] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.405 [2024-12-13 10:39:46.201580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.405 [2024-12-13 10:39:46.201974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.405 [2024-12-13 10:39:46.201996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.405 [2024-12-13 10:39:46.202006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.405 [2024-12-13 10:39:46.202194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.405 [2024-12-13 10:39:46.202384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.405 [2024-12-13 10:39:46.202397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.405 [2024-12-13 10:39:46.202406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.405 [2024-12-13 10:39:46.202414] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.405 [2024-12-13 10:39:46.214710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.405 [2024-12-13 10:39:46.215177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.405 [2024-12-13 10:39:46.215235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.405 [2024-12-13 10:39:46.215268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.406 [2024-12-13 10:39:46.215737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.406 [2024-12-13 10:39:46.215925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.406 [2024-12-13 10:39:46.215936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.406 [2024-12-13 10:39:46.215945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.406 [2024-12-13 10:39:46.215953] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.406 [2024-12-13 10:39:46.227741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.406 [2024-12-13 10:39:46.228202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.406 [2024-12-13 10:39:46.228259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.406 [2024-12-13 10:39:46.228291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.406 [2024-12-13 10:39:46.228737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.406 [2024-12-13 10:39:46.228915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.406 [2024-12-13 10:39:46.228925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.406 [2024-12-13 10:39:46.228933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.406 [2024-12-13 10:39:46.228941] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.406 [2024-12-13 10:39:46.240809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.406 [2024-12-13 10:39:46.241282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.406 [2024-12-13 10:39:46.241339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.406 [2024-12-13 10:39:46.241371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.406 [2024-12-13 10:39:46.241781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.406 [2024-12-13 10:39:46.241969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.406 [2024-12-13 10:39:46.241979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.406 [2024-12-13 10:39:46.241988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.406 [2024-12-13 10:39:46.242000] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.406 [2024-12-13 10:39:46.253965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.406 [2024-12-13 10:39:46.254421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.406 [2024-12-13 10:39:46.254490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.406 [2024-12-13 10:39:46.254523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.406 [2024-12-13 10:39:46.255171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.406 [2024-12-13 10:39:46.255711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.406 [2024-12-13 10:39:46.255722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.406 [2024-12-13 10:39:46.255730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.406 [2024-12-13 10:39:46.255744] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.406 [2024-12-13 10:39:46.267110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.406 [2024-12-13 10:39:46.267565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.406 [2024-12-13 10:39:46.267624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.406 [2024-12-13 10:39:46.267657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.406 [2024-12-13 10:39:46.268305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.406 [2024-12-13 10:39:46.268940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.406 [2024-12-13 10:39:46.268957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.406 [2024-12-13 10:39:46.268970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.406 [2024-12-13 10:39:46.268983] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.406 [2024-12-13 10:39:46.281324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.406 [2024-12-13 10:39:46.281793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.406 [2024-12-13 10:39:46.281814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.406 [2024-12-13 10:39:46.281825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.406 [2024-12-13 10:39:46.282030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.406 [2024-12-13 10:39:46.282235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.406 [2024-12-13 10:39:46.282247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.406 [2024-12-13 10:39:46.282257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.406 [2024-12-13 10:39:46.282266] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.406 [2024-12-13 10:39:46.294670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.666 [2024-12-13 10:39:46.295063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.666 [2024-12-13 10:39:46.295084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.666 [2024-12-13 10:39:46.295094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.666 [2024-12-13 10:39:46.295288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.666 [2024-12-13 10:39:46.295487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.666 [2024-12-13 10:39:46.295499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.666 [2024-12-13 10:39:46.295507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.666 [2024-12-13 10:39:46.295516] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.666 [2024-12-13 10:39:46.307887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.666 [2024-12-13 10:39:46.308367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.666 [2024-12-13 10:39:46.308426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.666 [2024-12-13 10:39:46.308476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.666 [2024-12-13 10:39:46.308927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.666 [2024-12-13 10:39:46.309115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.666 [2024-12-13 10:39:46.309126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.666 [2024-12-13 10:39:46.309134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.666 [2024-12-13 10:39:46.309143] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.666 [2024-12-13 10:39:46.320906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.666 [2024-12-13 10:39:46.321383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.666 [2024-12-13 10:39:46.321430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.666 [2024-12-13 10:39:46.321481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.666 [2024-12-13 10:39:46.322030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.666 [2024-12-13 10:39:46.322218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.666 [2024-12-13 10:39:46.322229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.667 [2024-12-13 10:39:46.322237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.667 [2024-12-13 10:39:46.322246] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.667 [2024-12-13 10:39:46.334034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.667 [2024-12-13 10:39:46.334480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.667 [2024-12-13 10:39:46.334501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.667 [2024-12-13 10:39:46.334514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.667 [2024-12-13 10:39:46.334702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.667 [2024-12-13 10:39:46.334891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.667 [2024-12-13 10:39:46.334902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.667 [2024-12-13 10:39:46.334910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.667 [2024-12-13 10:39:46.334919] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.667 [2024-12-13 10:39:46.347101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.667 [2024-12-13 10:39:46.347474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.667 [2024-12-13 10:39:46.347495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.667 [2024-12-13 10:39:46.347506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.667 [2024-12-13 10:39:46.347685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.667 [2024-12-13 10:39:46.347864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.667 [2024-12-13 10:39:46.347874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.667 [2024-12-13 10:39:46.347883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.667 [2024-12-13 10:39:46.347892] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.667 [2024-12-13 10:39:46.360150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.667 [2024-12-13 10:39:46.360628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.667 [2024-12-13 10:39:46.360686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.667 [2024-12-13 10:39:46.360717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.667 [2024-12-13 10:39:46.361233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.667 [2024-12-13 10:39:46.361421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.667 [2024-12-13 10:39:46.361431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.667 [2024-12-13 10:39:46.361440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.667 [2024-12-13 10:39:46.361455] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.667 [2024-12-13 10:39:46.373239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.667 [2024-12-13 10:39:46.373697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.667 [2024-12-13 10:39:46.373755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.667 [2024-12-13 10:39:46.373788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.667 [2024-12-13 10:39:46.374437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.667 [2024-12-13 10:39:46.375004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.667 [2024-12-13 10:39:46.375015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.667 [2024-12-13 10:39:46.375024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.667 [2024-12-13 10:39:46.375032] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.667 [2024-12-13 10:39:46.386322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.667 [2024-12-13 10:39:46.386801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.667 [2024-12-13 10:39:46.386823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.667 [2024-12-13 10:39:46.386833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.667 [2024-12-13 10:39:46.387022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.667 [2024-12-13 10:39:46.387210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.667 [2024-12-13 10:39:46.387221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.667 [2024-12-13 10:39:46.387229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.667 [2024-12-13 10:39:46.387238] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.667 [2024-12-13 10:39:46.399476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.667 [2024-12-13 10:39:46.399931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.667 [2024-12-13 10:39:46.399988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.667 [2024-12-13 10:39:46.400020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.667 [2024-12-13 10:39:46.400686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.667 [2024-12-13 10:39:46.401220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.667 [2024-12-13 10:39:46.401231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.667 [2024-12-13 10:39:46.401239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.667 [2024-12-13 10:39:46.401248] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.667 [2024-12-13 10:39:46.412551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.667 [2024-12-13 10:39:46.413007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.667 [2024-12-13 10:39:46.413027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.667 [2024-12-13 10:39:46.413036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.667 [2024-12-13 10:39:46.413215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.667 [2024-12-13 10:39:46.413393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.667 [2024-12-13 10:39:46.413404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.667 [2024-12-13 10:39:46.413415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.667 [2024-12-13 10:39:46.413423] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.667 [2024-12-13 10:39:46.425717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.667 [2024-12-13 10:39:46.426182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.667 [2024-12-13 10:39:46.426241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.667 [2024-12-13 10:39:46.426287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.667 [2024-12-13 10:39:46.426953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.667 [2024-12-13 10:39:46.427350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.667 [2024-12-13 10:39:46.427360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.667 [2024-12-13 10:39:46.427368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.667 [2024-12-13 10:39:46.427377] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.667 [2024-12-13 10:39:46.439274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.667 [2024-12-13 10:39:46.439726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.667 [2024-12-13 10:39:46.439749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.667 [2024-12-13 10:39:46.439759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.667 [2024-12-13 10:39:46.439953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.667 [2024-12-13 10:39:46.440146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.667 [2024-12-13 10:39:46.440157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.667 [2024-12-13 10:39:46.440166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.667 [2024-12-13 10:39:46.440175] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.667 [2024-12-13 10:39:46.452417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.667 [2024-12-13 10:39:46.452801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.667 [2024-12-13 10:39:46.452822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.667 [2024-12-13 10:39:46.452832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.667 [2024-12-13 10:39:46.453019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.667 [2024-12-13 10:39:46.453207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.668 [2024-12-13 10:39:46.453217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.668 [2024-12-13 10:39:46.453226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.668 [2024-12-13 10:39:46.453235] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.668 [2024-12-13 10:39:46.465521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.668 [2024-12-13 10:39:46.465970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.668 [2024-12-13 10:39:46.465991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.668 [2024-12-13 10:39:46.466001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.668 [2024-12-13 10:39:46.466189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.668 [2024-12-13 10:39:46.466376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.668 [2024-12-13 10:39:46.466387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.668 [2024-12-13 10:39:46.466396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.668 [2024-12-13 10:39:46.466405] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.668 [2024-12-13 10:39:46.478528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.668 [2024-12-13 10:39:46.478986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.668 [2024-12-13 10:39:46.479043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.668 [2024-12-13 10:39:46.479075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.668 [2024-12-13 10:39:46.479534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.668 [2024-12-13 10:39:46.479722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.668 [2024-12-13 10:39:46.479733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.668 [2024-12-13 10:39:46.479741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.668 [2024-12-13 10:39:46.479750] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.668 [2024-12-13 10:39:46.491694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.668 [2024-12-13 10:39:46.492057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.668 [2024-12-13 10:39:46.492078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.668 [2024-12-13 10:39:46.492087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.668 [2024-12-13 10:39:46.492265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.668 [2024-12-13 10:39:46.492443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.668 [2024-12-13 10:39:46.492461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.668 [2024-12-13 10:39:46.492469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.668 [2024-12-13 10:39:46.492477] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.668 [2024-12-13 10:39:46.504743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.668 [2024-12-13 10:39:46.505199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.668 [2024-12-13 10:39:46.505223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.668 [2024-12-13 10:39:46.505233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.668 [2024-12-13 10:39:46.505421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.668 [2024-12-13 10:39:46.505618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.668 [2024-12-13 10:39:46.505629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.668 [2024-12-13 10:39:46.505638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.668 [2024-12-13 10:39:46.505646] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.668 [2024-12-13 10:39:46.517772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.668 [2024-12-13 10:39:46.518225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.668 [2024-12-13 10:39:46.518246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.668 [2024-12-13 10:39:46.518256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.668 [2024-12-13 10:39:46.518444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.668 [2024-12-13 10:39:46.518640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.668 [2024-12-13 10:39:46.518651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.668 [2024-12-13 10:39:46.518659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.668 [2024-12-13 10:39:46.518668] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.668 [2024-12-13 10:39:46.531104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.668 [2024-12-13 10:39:46.531540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.668 [2024-12-13 10:39:46.531561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.668 [2024-12-13 10:39:46.531573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.668 [2024-12-13 10:39:46.531780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.668 [2024-12-13 10:39:46.531974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.668 [2024-12-13 10:39:46.531985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.668 [2024-12-13 10:39:46.531995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.668 [2024-12-13 10:39:46.532004] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.668 [2024-12-13 10:39:46.544490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.668 [2024-12-13 10:39:46.544971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.668 [2024-12-13 10:39:46.545028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.668 [2024-12-13 10:39:46.545060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.668 [2024-12-13 10:39:46.545552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.668 [2024-12-13 10:39:46.545741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.668 [2024-12-13 10:39:46.545751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.668 [2024-12-13 10:39:46.545760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.668 [2024-12-13 10:39:46.545768] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.928 [2024-12-13 10:39:46.557833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.928 [2024-12-13 10:39:46.558252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.928 [2024-12-13 10:39:46.558314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.928 [2024-12-13 10:39:46.558346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.929 [2024-12-13 10:39:46.559012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.929 [2024-12-13 10:39:46.559500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.929 [2024-12-13 10:39:46.559511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.929 [2024-12-13 10:39:46.559520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.929 [2024-12-13 10:39:46.559528] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.929 [2024-12-13 10:39:46.570890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.929 [2024-12-13 10:39:46.571342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.929 [2024-12-13 10:39:46.571362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.929 [2024-12-13 10:39:46.571372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.929 [2024-12-13 10:39:46.571568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.929 [2024-12-13 10:39:46.571757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.929 [2024-12-13 10:39:46.571768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.929 [2024-12-13 10:39:46.571776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.929 [2024-12-13 10:39:46.571784] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.929 [2024-12-13 10:39:46.583905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.929 [2024-12-13 10:39:46.584358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.929 [2024-12-13 10:39:46.584378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.929 [2024-12-13 10:39:46.584388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.929 [2024-12-13 10:39:46.584583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.929 [2024-12-13 10:39:46.584772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.929 [2024-12-13 10:39:46.584785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.929 [2024-12-13 10:39:46.584794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.929 [2024-12-13 10:39:46.584803] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.929 [2024-12-13 10:39:46.597014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.929 [2024-12-13 10:39:46.597471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.929 [2024-12-13 10:39:46.597492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.929 [2024-12-13 10:39:46.597502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.929 [2024-12-13 10:39:46.597690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.929 [2024-12-13 10:39:46.597878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.929 [2024-12-13 10:39:46.597889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.929 [2024-12-13 10:39:46.597897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.929 [2024-12-13 10:39:46.597906] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.929 [2024-12-13 10:39:46.610040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.929 [2024-12-13 10:39:46.610472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.929 [2024-12-13 10:39:46.610493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.929 [2024-12-13 10:39:46.610503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.929 [2024-12-13 10:39:46.610692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.929 [2024-12-13 10:39:46.610886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.929 [2024-12-13 10:39:46.610896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.929 [2024-12-13 10:39:46.610905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.929 [2024-12-13 10:39:46.610914] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.929 [2024-12-13 10:39:46.623196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.929 [2024-12-13 10:39:46.623655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.929 [2024-12-13 10:39:46.623714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.929 [2024-12-13 10:39:46.623747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.929 [2024-12-13 10:39:46.624396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.929 [2024-12-13 10:39:46.624745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.929 [2024-12-13 10:39:46.624756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.929 [2024-12-13 10:39:46.624767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.929 [2024-12-13 10:39:46.624776] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.929 [2024-12-13 10:39:46.636259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.929 [2024-12-13 10:39:46.636696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.929 [2024-12-13 10:39:46.636716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.929 [2024-12-13 10:39:46.636726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.929 [2024-12-13 10:39:46.636915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.929 [2024-12-13 10:39:46.637102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.929 [2024-12-13 10:39:46.637112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.929 [2024-12-13 10:39:46.637121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.929 [2024-12-13 10:39:46.637129] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.929 [2024-12-13 10:39:46.649356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.929 [2024-12-13 10:39:46.649810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.929 [2024-12-13 10:39:46.649833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.929 [2024-12-13 10:39:46.649843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.929 [2024-12-13 10:39:46.650031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.929 [2024-12-13 10:39:46.650220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.929 [2024-12-13 10:39:46.650230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.929 [2024-12-13 10:39:46.650239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.929 [2024-12-13 10:39:46.650248] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.929 [2024-12-13 10:39:46.662375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.929 [2024-12-13 10:39:46.662868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.929 [2024-12-13 10:39:46.662890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.929 [2024-12-13 10:39:46.662900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.929 [2024-12-13 10:39:46.663088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.929 [2024-12-13 10:39:46.663277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.929 [2024-12-13 10:39:46.663287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.929 [2024-12-13 10:39:46.663296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.929 [2024-12-13 10:39:46.663304] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.929 [2024-12-13 10:39:46.675419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.929 [2024-12-13 10:39:46.675851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.929 [2024-12-13 10:39:46.675910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.929 [2024-12-13 10:39:46.675943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.929 [2024-12-13 10:39:46.676511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.929 [2024-12-13 10:39:46.676690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.929 [2024-12-13 10:39:46.676700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.929 [2024-12-13 10:39:46.676707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.929 [2024-12-13 10:39:46.676715] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.929 [2024-12-13 10:39:46.688573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.929 [2024-12-13 10:39:46.688976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.929 [2024-12-13 10:39:46.688996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.930 [2024-12-13 10:39:46.689006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.930 [2024-12-13 10:39:46.689184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.930 [2024-12-13 10:39:46.689362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.930 [2024-12-13 10:39:46.689372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.930 [2024-12-13 10:39:46.689380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.930 [2024-12-13 10:39:46.689388] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.930 [2024-12-13 10:39:46.701758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.930 [2024-12-13 10:39:46.702212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.930 [2024-12-13 10:39:46.702234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.930 [2024-12-13 10:39:46.702244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.930 [2024-12-13 10:39:46.702433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.930 [2024-12-13 10:39:46.702629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.930 [2024-12-13 10:39:46.702640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.930 [2024-12-13 10:39:46.702649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.930 [2024-12-13 10:39:46.702658] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.930 [2024-12-13 10:39:46.714771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.930 [2024-12-13 10:39:46.715152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.930 [2024-12-13 10:39:46.715173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.930 [2024-12-13 10:39:46.715186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.930 [2024-12-13 10:39:46.715373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.930 [2024-12-13 10:39:46.715570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.930 [2024-12-13 10:39:46.715581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.930 [2024-12-13 10:39:46.715590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.930 [2024-12-13 10:39:46.715598] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.930 [2024-12-13 10:39:46.727813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.930 [2024-12-13 10:39:46.728266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.930 [2024-12-13 10:39:46.728288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.930 [2024-12-13 10:39:46.728298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.930 [2024-12-13 10:39:46.728494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.930 [2024-12-13 10:39:46.728684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.930 [2024-12-13 10:39:46.728694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.930 [2024-12-13 10:39:46.728702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.930 [2024-12-13 10:39:46.728711] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.930 [2024-12-13 10:39:46.741081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.930 [2024-12-13 10:39:46.741462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.930 [2024-12-13 10:39:46.741483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.930 [2024-12-13 10:39:46.741493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.930 [2024-12-13 10:39:46.741686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.930 [2024-12-13 10:39:46.741865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.930 [2024-12-13 10:39:46.741875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.930 [2024-12-13 10:39:46.741883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.930 [2024-12-13 10:39:46.741891] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.930 [2024-12-13 10:39:46.754174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.930 [2024-12-13 10:39:46.754599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.930 [2024-12-13 10:39:46.754620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.930 [2024-12-13 10:39:46.754629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.930 [2024-12-13 10:39:46.754809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.930 [2024-12-13 10:39:46.754988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.930 [2024-12-13 10:39:46.754998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.930 [2024-12-13 10:39:46.755006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.930 [2024-12-13 10:39:46.755014] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.930 [2024-12-13 10:39:46.767291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.930 [2024-12-13 10:39:46.767747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.930 [2024-12-13 10:39:46.767768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.930 [2024-12-13 10:39:46.767778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.930 [2024-12-13 10:39:46.767967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.930 [2024-12-13 10:39:46.768155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.930 [2024-12-13 10:39:46.768166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.930 [2024-12-13 10:39:46.768175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.930 [2024-12-13 10:39:46.768183] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.930 [2024-12-13 10:39:46.780467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.930 [2024-12-13 10:39:46.780888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.930 [2024-12-13 10:39:46.780909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.930 [2024-12-13 10:39:46.780919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.930 [2024-12-13 10:39:46.781106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.930 [2024-12-13 10:39:46.781294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.930 [2024-12-13 10:39:46.781305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.930 [2024-12-13 10:39:46.781314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.930 [2024-12-13 10:39:46.781322] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:52.930 [2024-12-13 10:39:46.793622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.930 [2024-12-13 10:39:46.794048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.930 [2024-12-13 10:39:46.794069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.930 [2024-12-13 10:39:46.794079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.930 [2024-12-13 10:39:46.794267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.930 [2024-12-13 10:39:46.794463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.930 [2024-12-13 10:39:46.794478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.930 [2024-12-13 10:39:46.794492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.930 [2024-12-13 10:39:46.794501] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:52.930 [2024-12-13 10:39:46.806960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:52.930 [2024-12-13 10:39:46.807344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:52.930 [2024-12-13 10:39:46.807366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:52.930 [2024-12-13 10:39:46.807376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:52.930 [2024-12-13 10:39:46.807578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:52.930 [2024-12-13 10:39:46.807771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:52.930 [2024-12-13 10:39:46.807782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:52.930 [2024-12-13 10:39:46.807792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:52.930 [2024-12-13 10:39:46.807802] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.191 [2024-12-13 10:39:46.820274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.191 [2024-12-13 10:39:46.820675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.191 [2024-12-13 10:39:46.820697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.191 [2024-12-13 10:39:46.820707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.191 [2024-12-13 10:39:46.820901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.191 [2024-12-13 10:39:46.821095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.191 [2024-12-13 10:39:46.821106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.191 [2024-12-13 10:39:46.821114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.191 [2024-12-13 10:39:46.821123] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.191 [2024-12-13 10:39:46.833422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.191 [2024-12-13 10:39:46.833855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.191 [2024-12-13 10:39:46.833875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.191 [2024-12-13 10:39:46.833885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.191 [2024-12-13 10:39:46.834062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.191 [2024-12-13 10:39:46.834241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.191 [2024-12-13 10:39:46.834251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.191 [2024-12-13 10:39:46.834259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.191 [2024-12-13 10:39:46.834271] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.191 [2024-12-13 10:39:46.846635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.191 [2024-12-13 10:39:46.847056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.191 [2024-12-13 10:39:46.847075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.191 [2024-12-13 10:39:46.847085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.191 [2024-12-13 10:39:46.847263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.191 [2024-12-13 10:39:46.847441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.191 [2024-12-13 10:39:46.847458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.191 [2024-12-13 10:39:46.847466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.191 [2024-12-13 10:39:46.847475] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.191 [2024-12-13 10:39:46.859658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.191 [2024-12-13 10:39:46.860009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.191 [2024-12-13 10:39:46.860028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.191 [2024-12-13 10:39:46.860037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.191 [2024-12-13 10:39:46.860215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.191 [2024-12-13 10:39:46.860393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.191 [2024-12-13 10:39:46.860403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.191 [2024-12-13 10:39:46.860411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.191 [2024-12-13 10:39:46.860419] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.191 [2024-12-13 10:39:46.872795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.191 [2024-12-13 10:39:46.873217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.191 [2024-12-13 10:39:46.873237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.191 [2024-12-13 10:39:46.873246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.191 [2024-12-13 10:39:46.873424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.191 [2024-12-13 10:39:46.873633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.191 [2024-12-13 10:39:46.873644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.191 [2024-12-13 10:39:46.873652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.191 [2024-12-13 10:39:46.873661] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.191 [2024-12-13 10:39:46.885939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.191 [2024-12-13 10:39:46.886383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.191 [2024-12-13 10:39:46.886403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.191 [2024-12-13 10:39:46.886412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.191 [2024-12-13 10:39:46.886607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.191 [2024-12-13 10:39:46.886795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.191 [2024-12-13 10:39:46.886805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.191 [2024-12-13 10:39:46.886814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.191 [2024-12-13 10:39:46.886822] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.191 [2024-12-13 10:39:46.899031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.191 [2024-12-13 10:39:46.899376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.191 [2024-12-13 10:39:46.899395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.191 [2024-12-13 10:39:46.899404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.191 [2024-12-13 10:39:46.899611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.191 [2024-12-13 10:39:46.899800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.191 [2024-12-13 10:39:46.899810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.191 [2024-12-13 10:39:46.899819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.191 [2024-12-13 10:39:46.899828] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.191 [2024-12-13 10:39:46.912129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.191 [2024-12-13 10:39:46.912560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.191 [2024-12-13 10:39:46.912580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.191 [2024-12-13 10:39:46.912590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.191 [2024-12-13 10:39:46.912769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.191 [2024-12-13 10:39:46.912948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.191 [2024-12-13 10:39:46.912958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.191 [2024-12-13 10:39:46.912966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.191 [2024-12-13 10:39:46.912974] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.192 [2024-12-13 10:39:46.925353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.192 [2024-12-13 10:39:46.925811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.192 [2024-12-13 10:39:46.925832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.192 [2024-12-13 10:39:46.925845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.192 [2024-12-13 10:39:46.926033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.192 [2024-12-13 10:39:46.926221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.192 [2024-12-13 10:39:46.926232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.192 [2024-12-13 10:39:46.926240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.192 [2024-12-13 10:39:46.926249] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.192 [2024-12-13 10:39:46.938521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.192 [2024-12-13 10:39:46.938902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.192 [2024-12-13 10:39:46.938922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.192 [2024-12-13 10:39:46.938932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.192 [2024-12-13 10:39:46.939120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.192 [2024-12-13 10:39:46.939308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.192 [2024-12-13 10:39:46.939319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.192 [2024-12-13 10:39:46.939328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.192 [2024-12-13 10:39:46.939336] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.192 [2024-12-13 10:39:46.951602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.192 [2024-12-13 10:39:46.951969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.192 [2024-12-13 10:39:46.951990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.192 [2024-12-13 10:39:46.952000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.192 [2024-12-13 10:39:46.952189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.192 [2024-12-13 10:39:46.952377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.192 [2024-12-13 10:39:46.952389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.192 [2024-12-13 10:39:46.952397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.192 [2024-12-13 10:39:46.952406] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.192 [2024-12-13 10:39:46.964704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.192 [2024-12-13 10:39:46.965138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.192 [2024-12-13 10:39:46.965198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.192 [2024-12-13 10:39:46.965230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.192 [2024-12-13 10:39:46.965692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.192 [2024-12-13 10:39:46.965885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.192 [2024-12-13 10:39:46.965895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.192 [2024-12-13 10:39:46.965904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.192 [2024-12-13 10:39:46.965913] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.192 [2024-12-13 10:39:46.977853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.192 [2024-12-13 10:39:46.978279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.192 [2024-12-13 10:39:46.978301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.192 [2024-12-13 10:39:46.978310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.192 [2024-12-13 10:39:46.978512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.192 [2024-12-13 10:39:46.978700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.192 [2024-12-13 10:39:46.978711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.192 [2024-12-13 10:39:46.978720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.192 [2024-12-13 10:39:46.978729] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.192 [2024-12-13 10:39:46.991108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.192 [2024-12-13 10:39:46.991491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.192 [2024-12-13 10:39:46.991555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.192 [2024-12-13 10:39:46.991587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.192 [2024-12-13 10:39:46.992236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.192 [2024-12-13 10:39:46.992659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.192 [2024-12-13 10:39:46.992670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.192 [2024-12-13 10:39:46.992679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.192 [2024-12-13 10:39:46.992687] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.192 [2024-12-13 10:39:47.004436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.192 [2024-12-13 10:39:47.004874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.192 [2024-12-13 10:39:47.004894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.192 [2024-12-13 10:39:47.004904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.192 [2024-12-13 10:39:47.005092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.192 [2024-12-13 10:39:47.005281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.192 [2024-12-13 10:39:47.005292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.192 [2024-12-13 10:39:47.005303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.192 [2024-12-13 10:39:47.005312] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.192 [2024-12-13 10:39:47.017680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.192 [2024-12-13 10:39:47.018133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.192 [2024-12-13 10:39:47.018155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.192 [2024-12-13 10:39:47.018165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.192 [2024-12-13 10:39:47.018360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.192 [2024-12-13 10:39:47.018562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.192 [2024-12-13 10:39:47.018574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.192 [2024-12-13 10:39:47.018583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.192 [2024-12-13 10:39:47.018593] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.192 [2024-12-13 10:39:47.030750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.192 [2024-12-13 10:39:47.031193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.192 [2024-12-13 10:39:47.031251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.192 [2024-12-13 10:39:47.031283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.192 [2024-12-13 10:39:47.031787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.192 [2024-12-13 10:39:47.031966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.192 [2024-12-13 10:39:47.031976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.192 [2024-12-13 10:39:47.031984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.192 [2024-12-13 10:39:47.031993] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.192 [2024-12-13 10:39:47.043926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.192 [2024-12-13 10:39:47.044377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.192 [2024-12-13 10:39:47.044399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.192 [2024-12-13 10:39:47.044409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.192 [2024-12-13 10:39:47.044610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.192 [2024-12-13 10:39:47.044803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.192 [2024-12-13 10:39:47.044814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.192 [2024-12-13 10:39:47.044823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.192 [2024-12-13 10:39:47.044832] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.193 [2024-12-13 10:39:47.057346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.193 [2024-12-13 10:39:47.057858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.193 [2024-12-13 10:39:47.057922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.193 [2024-12-13 10:39:47.057954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.193 [2024-12-13 10:39:47.058506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.193 [2024-12-13 10:39:47.058696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.193 [2024-12-13 10:39:47.058706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.193 [2024-12-13 10:39:47.058714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.193 [2024-12-13 10:39:47.058723] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.193 [2024-12-13 10:39:47.070570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.193 [2024-12-13 10:39:47.071004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.193 [2024-12-13 10:39:47.071025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.193 [2024-12-13 10:39:47.071035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.193 [2024-12-13 10:39:47.071223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.193 [2024-12-13 10:39:47.071411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.193 [2024-12-13 10:39:47.071422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.193 [2024-12-13 10:39:47.071430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.193 [2024-12-13 10:39:47.071438] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.453 [2024-12-13 10:39:47.083896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.453 [2024-12-13 10:39:47.084348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.453 [2024-12-13 10:39:47.084368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.453 [2024-12-13 10:39:47.084379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.453 [2024-12-13 10:39:47.084574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.453 [2024-12-13 10:39:47.084762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.453 [2024-12-13 10:39:47.084773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.453 [2024-12-13 10:39:47.084781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.453 [2024-12-13 10:39:47.084790] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.453 [2024-12-13 10:39:47.096944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.453 [2024-12-13 10:39:47.097397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.453 [2024-12-13 10:39:47.097421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.453 [2024-12-13 10:39:47.097431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.453 [2024-12-13 10:39:47.097628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.453 [2024-12-13 10:39:47.097817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.453 [2024-12-13 10:39:47.097828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.453 [2024-12-13 10:39:47.097837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.453 [2024-12-13 10:39:47.097845] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.453 5150.00 IOPS, 20.12 MiB/s [2024-12-13T09:39:47.344Z] [2024-12-13 10:39:47.110046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.454 [2024-12-13 10:39:47.110516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.454 [2024-12-13 10:39:47.110578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.454 [2024-12-13 10:39:47.110610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.454 [2024-12-13 10:39:47.111261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.454 [2024-12-13 10:39:47.111580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.454 [2024-12-13 10:39:47.111592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.454 [2024-12-13 10:39:47.111601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.454 [2024-12-13 10:39:47.111610] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.454 [2024-12-13 10:39:47.123202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.454 [2024-12-13 10:39:47.123634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.454 [2024-12-13 10:39:47.123655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.454 [2024-12-13 10:39:47.123665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.454 [2024-12-13 10:39:47.123843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.454 [2024-12-13 10:39:47.124022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.454 [2024-12-13 10:39:47.124032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.454 [2024-12-13 10:39:47.124040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.454 [2024-12-13 10:39:47.124048] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.454 [2024-12-13 10:39:47.136375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.454 [2024-12-13 10:39:47.136851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.454 [2024-12-13 10:39:47.136872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.454 [2024-12-13 10:39:47.136885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.454 [2024-12-13 10:39:47.137074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.454 [2024-12-13 10:39:47.137262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.454 [2024-12-13 10:39:47.137273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.454 [2024-12-13 10:39:47.137282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.454 [2024-12-13 10:39:47.137291] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.454 [2024-12-13 10:39:47.149565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.454 [2024-12-13 10:39:47.150029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.454 [2024-12-13 10:39:47.150087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.454 [2024-12-13 10:39:47.150119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.454 [2024-12-13 10:39:47.150595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.454 [2024-12-13 10:39:47.150785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.454 [2024-12-13 10:39:47.150795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.454 [2024-12-13 10:39:47.150804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.454 [2024-12-13 10:39:47.150813] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.454 [2024-12-13 10:39:47.162608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.454 [2024-12-13 10:39:47.163055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.454 [2024-12-13 10:39:47.163076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.454 [2024-12-13 10:39:47.163086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.454 [2024-12-13 10:39:47.163273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.454 [2024-12-13 10:39:47.163466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.454 [2024-12-13 10:39:47.163477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.454 [2024-12-13 10:39:47.163485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.454 [2024-12-13 10:39:47.163494] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.454 [2024-12-13 10:39:47.175766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.454 [2024-12-13 10:39:47.176212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.454 [2024-12-13 10:39:47.176239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.454 [2024-12-13 10:39:47.176249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.454 [2024-12-13 10:39:47.176438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.454 [2024-12-13 10:39:47.176638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.454 [2024-12-13 10:39:47.176649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.454 [2024-12-13 10:39:47.176657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.454 [2024-12-13 10:39:47.176666] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.454 [2024-12-13 10:39:47.188778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.454 [2024-12-13 10:39:47.189247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.454 [2024-12-13 10:39:47.189304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.454 [2024-12-13 10:39:47.189336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.454 [2024-12-13 10:39:47.189875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.454 [2024-12-13 10:39:47.190064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.454 [2024-12-13 10:39:47.190075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.454 [2024-12-13 10:39:47.190083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.454 [2024-12-13 10:39:47.190092] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.454 [2024-12-13 10:39:47.201969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.454 [2024-12-13 10:39:47.202364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.454 [2024-12-13 10:39:47.202385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.454 [2024-12-13 10:39:47.202395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.454 [2024-12-13 10:39:47.202589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.454 [2024-12-13 10:39:47.202778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.454 [2024-12-13 10:39:47.202788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.454 [2024-12-13 10:39:47.202797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.454 [2024-12-13 10:39:47.202806] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.454 [2024-12-13 10:39:47.215138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.454 [2024-12-13 10:39:47.215612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.454 [2024-12-13 10:39:47.215634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.454 [2024-12-13 10:39:47.215644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.454 [2024-12-13 10:39:47.215833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.454 [2024-12-13 10:39:47.216021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.454 [2024-12-13 10:39:47.216032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.454 [2024-12-13 10:39:47.216043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.454 [2024-12-13 10:39:47.216052] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.454 [2024-12-13 10:39:47.228289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.454 [2024-12-13 10:39:47.228652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.454 [2024-12-13 10:39:47.228674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.454 [2024-12-13 10:39:47.228685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.454 [2024-12-13 10:39:47.228873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.454 [2024-12-13 10:39:47.229061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.454 [2024-12-13 10:39:47.229073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.454 [2024-12-13 10:39:47.229081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.454 [2024-12-13 10:39:47.229090] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.454 [2024-12-13 10:39:47.241669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.455 [2024-12-13 10:39:47.242040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.455 [2024-12-13 10:39:47.242061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.455 [2024-12-13 10:39:47.242071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.455 [2024-12-13 10:39:47.242265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.455 [2024-12-13 10:39:47.242465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.455 [2024-12-13 10:39:47.242477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.455 [2024-12-13 10:39:47.242485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.455 [2024-12-13 10:39:47.242494] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.455 [2024-12-13 10:39:47.254709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.455 [2024-12-13 10:39:47.255110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.455 [2024-12-13 10:39:47.255131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.455 [2024-12-13 10:39:47.255140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.455 [2024-12-13 10:39:47.255328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.455 [2024-12-13 10:39:47.255524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.455 [2024-12-13 10:39:47.255536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.455 [2024-12-13 10:39:47.255544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.455 [2024-12-13 10:39:47.255553] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.455 [2024-12-13 10:39:47.267736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.455 [2024-12-13 10:39:47.268253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.455 [2024-12-13 10:39:47.268309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.455 [2024-12-13 10:39:47.268341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.455 [2024-12-13 10:39:47.269005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.455 [2024-12-13 10:39:47.269295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.455 [2024-12-13 10:39:47.269305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.455 [2024-12-13 10:39:47.269313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.455 [2024-12-13 10:39:47.269321] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.455 [2024-12-13 10:39:47.281945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.455 [2024-12-13 10:39:47.282417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.455 [2024-12-13 10:39:47.282453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.455 [2024-12-13 10:39:47.282464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.455 [2024-12-13 10:39:47.282670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.455 [2024-12-13 10:39:47.282876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.455 [2024-12-13 10:39:47.282887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.455 [2024-12-13 10:39:47.282896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.455 [2024-12-13 10:39:47.282906] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.455 [2024-12-13 10:39:47.295086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.455 [2024-12-13 10:39:47.295575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.455 [2024-12-13 10:39:47.295597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.455 [2024-12-13 10:39:47.295608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.455 [2024-12-13 10:39:47.295802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.455 [2024-12-13 10:39:47.295996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.455 [2024-12-13 10:39:47.296007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.455 [2024-12-13 10:39:47.296015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.455 [2024-12-13 10:39:47.296025] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.455 [2024-12-13 10:39:47.308489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.455 [2024-12-13 10:39:47.308816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.455 [2024-12-13 10:39:47.308840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.455 [2024-12-13 10:39:47.308851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.455 [2024-12-13 10:39:47.309044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.455 [2024-12-13 10:39:47.309238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.455 [2024-12-13 10:39:47.309249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.455 [2024-12-13 10:39:47.309257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.455 [2024-12-13 10:39:47.309266] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.455 [2024-12-13 10:39:47.321702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.455 [2024-12-13 10:39:47.322061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.455 [2024-12-13 10:39:47.322098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.455 [2024-12-13 10:39:47.322109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.455 [2024-12-13 10:39:47.322302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.455 [2024-12-13 10:39:47.322505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.455 [2024-12-13 10:39:47.322516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.455 [2024-12-13 10:39:47.322525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.455 [2024-12-13 10:39:47.322534] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.455 [2024-12-13 10:39:47.334848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.455 [2024-12-13 10:39:47.335324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.455 [2024-12-13 10:39:47.335382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.455 [2024-12-13 10:39:47.335414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.455 [2024-12-13 10:39:47.336076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.455 [2024-12-13 10:39:47.336565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.455 [2024-12-13 10:39:47.336576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.455 [2024-12-13 10:39:47.336585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.455 [2024-12-13 10:39:47.336594] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.716 [2024-12-13 10:39:47.348165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.716 [2024-12-13 10:39:47.348631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.716 [2024-12-13 10:39:47.348654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.716 [2024-12-13 10:39:47.348664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.716 [2024-12-13 10:39:47.348862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.716 [2024-12-13 10:39:47.349055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.716 [2024-12-13 10:39:47.349066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.716 [2024-12-13 10:39:47.349075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.716 [2024-12-13 10:39:47.349085] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.716 [2024-12-13 10:39:47.361274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.716 [2024-12-13 10:39:47.361671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.716 [2024-12-13 10:39:47.361693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.716 [2024-12-13 10:39:47.361703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.716 [2024-12-13 10:39:47.361897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.716 [2024-12-13 10:39:47.362085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.716 [2024-12-13 10:39:47.362096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.716 [2024-12-13 10:39:47.362105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.716 [2024-12-13 10:39:47.362114] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.716 [2024-12-13 10:39:47.374410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.716 [2024-12-13 10:39:47.374815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.716 [2024-12-13 10:39:47.374836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.716 [2024-12-13 10:39:47.374846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.716 [2024-12-13 10:39:47.375034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.716 [2024-12-13 10:39:47.375222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.716 [2024-12-13 10:39:47.375233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.716 [2024-12-13 10:39:47.375241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.716 [2024-12-13 10:39:47.375250] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.716 [2024-12-13 10:39:47.387542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.716 [2024-12-13 10:39:47.387875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.716 [2024-12-13 10:39:47.387896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.716 [2024-12-13 10:39:47.387906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.716 [2024-12-13 10:39:47.388095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.716 [2024-12-13 10:39:47.388283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.716 [2024-12-13 10:39:47.388297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.716 [2024-12-13 10:39:47.388306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.716 [2024-12-13 10:39:47.388315] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.716 [2024-12-13 10:39:47.400665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.716 [2024-12-13 10:39:47.400971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.716 [2024-12-13 10:39:47.400991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.716 [2024-12-13 10:39:47.401001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.716 [2024-12-13 10:39:47.401178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.716 [2024-12-13 10:39:47.401356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.716 [2024-12-13 10:39:47.401366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.716 [2024-12-13 10:39:47.401374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.716 [2024-12-13 10:39:47.401383] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.716 [2024-12-13 10:39:47.413844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.716 [2024-12-13 10:39:47.414277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.716 [2024-12-13 10:39:47.414298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.716 [2024-12-13 10:39:47.414307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.716 [2024-12-13 10:39:47.414503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.716 [2024-12-13 10:39:47.414691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.716 [2024-12-13 10:39:47.414702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.716 [2024-12-13 10:39:47.414710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.716 [2024-12-13 10:39:47.414719] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.716 [2024-12-13 10:39:47.426869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.716 [2024-12-13 10:39:47.427361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.716 [2024-12-13 10:39:47.427417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.716 [2024-12-13 10:39:47.427462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.716 [2024-12-13 10:39:47.428116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.716 [2024-12-13 10:39:47.428590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.716 [2024-12-13 10:39:47.428601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.716 [2024-12-13 10:39:47.428613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.716 [2024-12-13 10:39:47.428622] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.716 [2024-12-13 10:39:47.440037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.716 [2024-12-13 10:39:47.440421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.716 [2024-12-13 10:39:47.440442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.716 [2024-12-13 10:39:47.440459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.716 [2024-12-13 10:39:47.440647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.716 [2024-12-13 10:39:47.440836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.716 [2024-12-13 10:39:47.440846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.716 [2024-12-13 10:39:47.440855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.716 [2024-12-13 10:39:47.440863] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.717 [2024-12-13 10:39:47.453114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.717 [2024-12-13 10:39:47.453439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.717 [2024-12-13 10:39:47.453466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.717 [2024-12-13 10:39:47.453476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.717 [2024-12-13 10:39:47.453664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.717 [2024-12-13 10:39:47.453852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.717 [2024-12-13 10:39:47.453863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.717 [2024-12-13 10:39:47.453871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.717 [2024-12-13 10:39:47.453880] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.717 [2024-12-13 10:39:47.466284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.717 [2024-12-13 10:39:47.466756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.717 [2024-12-13 10:39:47.466815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.717 [2024-12-13 10:39:47.466847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.717 [2024-12-13 10:39:47.467510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.717 [2024-12-13 10:39:47.467859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.717 [2024-12-13 10:39:47.467870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.717 [2024-12-13 10:39:47.467879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.717 [2024-12-13 10:39:47.467887] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.717 [2024-12-13 10:39:47.479383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.717 [2024-12-13 10:39:47.479778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.717 [2024-12-13 10:39:47.479799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.717 [2024-12-13 10:39:47.479809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.717 [2024-12-13 10:39:47.479997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.717 [2024-12-13 10:39:47.480185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.717 [2024-12-13 10:39:47.480195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.717 [2024-12-13 10:39:47.480204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.717 [2024-12-13 10:39:47.480212] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.717 [2024-12-13 10:39:47.492607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.717 [2024-12-13 10:39:47.492967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.717 [2024-12-13 10:39:47.492988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.717 [2024-12-13 10:39:47.492997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.717 [2024-12-13 10:39:47.493184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.717 [2024-12-13 10:39:47.493372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.717 [2024-12-13 10:39:47.493383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.717 [2024-12-13 10:39:47.493391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.717 [2024-12-13 10:39:47.493400] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.717 [2024-12-13 10:39:47.505727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.717 [2024-12-13 10:39:47.506141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.717 [2024-12-13 10:39:47.506163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.717 [2024-12-13 10:39:47.506174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.717 [2024-12-13 10:39:47.506368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.717 [2024-12-13 10:39:47.506570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.717 [2024-12-13 10:39:47.506583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.717 [2024-12-13 10:39:47.506612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.717 [2024-12-13 10:39:47.506620] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.717 [2024-12-13 10:39:47.518878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.717 [2024-12-13 10:39:47.519275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.717 [2024-12-13 10:39:47.519295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.717 [2024-12-13 10:39:47.519308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.717 [2024-12-13 10:39:47.519505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.717 [2024-12-13 10:39:47.519694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.717 [2024-12-13 10:39:47.519705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.717 [2024-12-13 10:39:47.519714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.717 [2024-12-13 10:39:47.519722] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.717 [2024-12-13 10:39:47.532054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.717 [2024-12-13 10:39:47.532446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.717 [2024-12-13 10:39:47.532472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.717 [2024-12-13 10:39:47.532483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.717 [2024-12-13 10:39:47.532671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.717 [2024-12-13 10:39:47.532859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.717 [2024-12-13 10:39:47.532869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.717 [2024-12-13 10:39:47.532878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.717 [2024-12-13 10:39:47.532886] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.717 [2024-12-13 10:39:47.545213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.717 [2024-12-13 10:39:47.545635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.717 [2024-12-13 10:39:47.545656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.717 [2024-12-13 10:39:47.545667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.717 [2024-12-13 10:39:47.545861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.717 [2024-12-13 10:39:47.546054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.717 [2024-12-13 10:39:47.546072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.717 [2024-12-13 10:39:47.546081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.717 [2024-12-13 10:39:47.546090] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.717 [2024-12-13 10:39:47.558518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.717 [2024-12-13 10:39:47.558913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.717 [2024-12-13 10:39:47.558933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.717 [2024-12-13 10:39:47.558944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.717 [2024-12-13 10:39:47.559140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.717 [2024-12-13 10:39:47.559334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.717 [2024-12-13 10:39:47.559345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.717 [2024-12-13 10:39:47.559354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.717 [2024-12-13 10:39:47.559362] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.717 [2024-12-13 10:39:47.571837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.717 [2024-12-13 10:39:47.572161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.717 [2024-12-13 10:39:47.572182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.717 [2024-12-13 10:39:47.572192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.717 [2024-12-13 10:39:47.572380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.717 [2024-12-13 10:39:47.572577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.717 [2024-12-13 10:39:47.572588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.717 [2024-12-13 10:39:47.572597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.718 [2024-12-13 10:39:47.572605] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.718 [2024-12-13 10:39:47.585064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.718 [2024-12-13 10:39:47.585360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.718 [2024-12-13 10:39:47.585380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.718 [2024-12-13 10:39:47.585389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.718 [2024-12-13 10:39:47.585584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.718 [2024-12-13 10:39:47.585772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.718 [2024-12-13 10:39:47.585783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.718 [2024-12-13 10:39:47.585792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.718 [2024-12-13 10:39:47.585800] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.718 [2024-12-13 10:39:47.598213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.718 [2024-12-13 10:39:47.598626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.718 [2024-12-13 10:39:47.598646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.718 [2024-12-13 10:39:47.598656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.718 [2024-12-13 10:39:47.598844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.718 [2024-12-13 10:39:47.599033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.718 [2024-12-13 10:39:47.599047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.718 [2024-12-13 10:39:47.599056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.718 [2024-12-13 10:39:47.599064] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.978 [2024-12-13 10:39:47.611442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.978 [2024-12-13 10:39:47.611832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.978 [2024-12-13 10:39:47.611853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.978 [2024-12-13 10:39:47.611863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.978 [2024-12-13 10:39:47.612057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.978 [2024-12-13 10:39:47.612250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.978 [2024-12-13 10:39:47.612262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.978 [2024-12-13 10:39:47.612270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.978 [2024-12-13 10:39:47.612279] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.978 [2024-12-13 10:39:47.624587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.978 [2024-12-13 10:39:47.625015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.978 [2024-12-13 10:39:47.625035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.978 [2024-12-13 10:39:47.625045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.978 [2024-12-13 10:39:47.625233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.978 [2024-12-13 10:39:47.625420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.978 [2024-12-13 10:39:47.625431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.978 [2024-12-13 10:39:47.625440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.978 [2024-12-13 10:39:47.625456] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.978 [2024-12-13 10:39:47.637755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.978 [2024-12-13 10:39:47.638148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.978 [2024-12-13 10:39:47.638205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.978 [2024-12-13 10:39:47.638236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.978 [2024-12-13 10:39:47.638900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.978 [2024-12-13 10:39:47.639234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.978 [2024-12-13 10:39:47.639244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.978 [2024-12-13 10:39:47.639253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.978 [2024-12-13 10:39:47.639264] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.978 [2024-12-13 10:39:47.650876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.978 [2024-12-13 10:39:47.651362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.978 [2024-12-13 10:39:47.651422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.978 [2024-12-13 10:39:47.651472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.978 [2024-12-13 10:39:47.651972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.978 [2024-12-13 10:39:47.652160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.978 [2024-12-13 10:39:47.652171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.978 [2024-12-13 10:39:47.652180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.978 [2024-12-13 10:39:47.652188] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.978 [2024-12-13 10:39:47.663925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.978 [2024-12-13 10:39:47.664373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.978 [2024-12-13 10:39:47.664393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.978 [2024-12-13 10:39:47.664403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.978 [2024-12-13 10:39:47.664611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.978 [2024-12-13 10:39:47.664800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.978 [2024-12-13 10:39:47.664811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.979 [2024-12-13 10:39:47.664819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.979 [2024-12-13 10:39:47.664828] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.979 [2024-12-13 10:39:47.676950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.979 [2024-12-13 10:39:47.677405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.979 [2024-12-13 10:39:47.677426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.979 [2024-12-13 10:39:47.677435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.979 [2024-12-13 10:39:47.677646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.979 [2024-12-13 10:39:47.677835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.979 [2024-12-13 10:39:47.677846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.979 [2024-12-13 10:39:47.677854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.979 [2024-12-13 10:39:47.677862] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.979 [2024-12-13 10:39:47.689985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.979 [2024-12-13 10:39:47.690464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.979 [2024-12-13 10:39:47.690523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.979 [2024-12-13 10:39:47.690555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.979 [2024-12-13 10:39:47.691020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.979 [2024-12-13 10:39:47.691208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.979 [2024-12-13 10:39:47.691219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.979 [2024-12-13 10:39:47.691227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.979 [2024-12-13 10:39:47.691236] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.979 [2024-12-13 10:39:47.703087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.979 [2024-12-13 10:39:47.703540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.979 [2024-12-13 10:39:47.703604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.979 [2024-12-13 10:39:47.703637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.979 [2024-12-13 10:39:47.704195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.979 [2024-12-13 10:39:47.704373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.979 [2024-12-13 10:39:47.704383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.979 [2024-12-13 10:39:47.704392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.979 [2024-12-13 10:39:47.704400] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.979 [2024-12-13 10:39:47.716210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.979 [2024-12-13 10:39:47.716668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.979 [2024-12-13 10:39:47.716722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.979 [2024-12-13 10:39:47.716757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.979 [2024-12-13 10:39:47.717324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.979 [2024-12-13 10:39:47.717526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.979 [2024-12-13 10:39:47.717538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.979 [2024-12-13 10:39:47.717546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.979 [2024-12-13 10:39:47.717555] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.979 [2024-12-13 10:39:47.729271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.979 [2024-12-13 10:39:47.729724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.979 [2024-12-13 10:39:47.729789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.979 [2024-12-13 10:39:47.729830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.979 [2024-12-13 10:39:47.730343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.979 [2024-12-13 10:39:47.730537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.979 [2024-12-13 10:39:47.730548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.979 [2024-12-13 10:39:47.730557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.979 [2024-12-13 10:39:47.730571] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.979 [2024-12-13 10:39:47.742327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.979 [2024-12-13 10:39:47.742785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.979 [2024-12-13 10:39:47.742806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.979 [2024-12-13 10:39:47.742816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.979 [2024-12-13 10:39:47.743005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.979 [2024-12-13 10:39:47.743192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.979 [2024-12-13 10:39:47.743203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.979 [2024-12-13 10:39:47.743211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.979 [2024-12-13 10:39:47.743220] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.979 [2024-12-13 10:39:47.755379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.979 [2024-12-13 10:39:47.755817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.979 [2024-12-13 10:39:47.755837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.979 [2024-12-13 10:39:47.755847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.979 [2024-12-13 10:39:47.756025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.979 [2024-12-13 10:39:47.756204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.979 [2024-12-13 10:39:47.756214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.979 [2024-12-13 10:39:47.756222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.979 [2024-12-13 10:39:47.756230] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.979 [2024-12-13 10:39:47.768443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.979 [2024-12-13 10:39:47.768817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.979 [2024-12-13 10:39:47.768836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.979 [2024-12-13 10:39:47.768845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.979 [2024-12-13 10:39:47.769023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.979 [2024-12-13 10:39:47.769202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.979 [2024-12-13 10:39:47.769212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.979 [2024-12-13 10:39:47.769220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.979 [2024-12-13 10:39:47.769228] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.979 [2024-12-13 10:39:47.781606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.979 [2024-12-13 10:39:47.781996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.979 [2024-12-13 10:39:47.782017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.979 [2024-12-13 10:39:47.782026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.979 [2024-12-13 10:39:47.782204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.979 [2024-12-13 10:39:47.782382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.979 [2024-12-13 10:39:47.782392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.979 [2024-12-13 10:39:47.782400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.979 [2024-12-13 10:39:47.782409] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.979 [2024-12-13 10:39:47.794794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.979 [2024-12-13 10:39:47.795267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.979 [2024-12-13 10:39:47.795337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.979 [2024-12-13 10:39:47.795370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.979 [2024-12-13 10:39:47.796035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.979 [2024-12-13 10:39:47.796553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.979 [2024-12-13 10:39:47.796565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.980 [2024-12-13 10:39:47.796573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.980 [2024-12-13 10:39:47.796582] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.980 [2024-12-13 10:39:47.808171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.980 [2024-12-13 10:39:47.808634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.980 [2024-12-13 10:39:47.808656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.980 [2024-12-13 10:39:47.808667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.980 [2024-12-13 10:39:47.808860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.980 [2024-12-13 10:39:47.809054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.980 [2024-12-13 10:39:47.809065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.980 [2024-12-13 10:39:47.809077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.980 [2024-12-13 10:39:47.809086] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.980 [2024-12-13 10:39:47.821334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.980 [2024-12-13 10:39:47.821827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.980 [2024-12-13 10:39:47.821886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.980 [2024-12-13 10:39:47.821918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.980 [2024-12-13 10:39:47.822417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.980 [2024-12-13 10:39:47.822612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.980 [2024-12-13 10:39:47.822624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.980 [2024-12-13 10:39:47.822632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.980 [2024-12-13 10:39:47.822641] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.980 [2024-12-13 10:39:47.834434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.980 [2024-12-13 10:39:47.834869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.980 [2024-12-13 10:39:47.834926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.980 [2024-12-13 10:39:47.834957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.980 [2024-12-13 10:39:47.835377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.980 [2024-12-13 10:39:47.835583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.980 [2024-12-13 10:39:47.835594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.980 [2024-12-13 10:39:47.835602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.980 [2024-12-13 10:39:47.835611] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:53.980 [2024-12-13 10:39:47.847562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.980 [2024-12-13 10:39:47.848012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.980 [2024-12-13 10:39:47.848067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.980 [2024-12-13 10:39:47.848100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.980 [2024-12-13 10:39:47.848666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.980 [2024-12-13 10:39:47.848844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.980 [2024-12-13 10:39:47.848854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.980 [2024-12-13 10:39:47.848862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.980 [2024-12-13 10:39:47.848871] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:53.980 [2024-12-13 10:39:47.860584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:53.980 [2024-12-13 10:39:47.860951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:53.980 [2024-12-13 10:39:47.860972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:53.980 [2024-12-13 10:39:47.860981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:53.980 [2024-12-13 10:39:47.861159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:53.980 [2024-12-13 10:39:47.861337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:53.980 [2024-12-13 10:39:47.861347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:53.980 [2024-12-13 10:39:47.861355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:53.980 [2024-12-13 10:39:47.861363] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.240 [2024-12-13 10:39:47.873820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.240 [2024-12-13 10:39:47.874213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.240 [2024-12-13 10:39:47.874233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.240 [2024-12-13 10:39:47.874243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.240 [2024-12-13 10:39:47.874437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.240 [2024-12-13 10:39:47.874636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.240 [2024-12-13 10:39:47.874648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.240 [2024-12-13 10:39:47.874656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.240 [2024-12-13 10:39:47.874665] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.240 [2024-12-13 10:39:47.886929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.240 [2024-12-13 10:39:47.887418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.240 [2024-12-13 10:39:47.887486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.240 [2024-12-13 10:39:47.887519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.240 [2024-12-13 10:39:47.888167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.240 [2024-12-13 10:39:47.888681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.240 [2024-12-13 10:39:47.888692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.240 [2024-12-13 10:39:47.888700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.240 [2024-12-13 10:39:47.888708] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.240 [2024-12-13 10:39:47.899993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.240 [2024-12-13 10:39:47.900468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.240 [2024-12-13 10:39:47.900533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.240 [2024-12-13 10:39:47.900566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.240 [2024-12-13 10:39:47.901217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.240 [2024-12-13 10:39:47.901781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.240 [2024-12-13 10:39:47.901792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.240 [2024-12-13 10:39:47.901800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.240 [2024-12-13 10:39:47.901809] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.240 [2024-12-13 10:39:47.913106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.240 [2024-12-13 10:39:47.913574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.240 [2024-12-13 10:39:47.913632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.241 [2024-12-13 10:39:47.913665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.241 [2024-12-13 10:39:47.914316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.241 [2024-12-13 10:39:47.914795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.241 [2024-12-13 10:39:47.914806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.241 [2024-12-13 10:39:47.914814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.241 [2024-12-13 10:39:47.914822] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.241 [2024-12-13 10:39:47.926184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.241 [2024-12-13 10:39:47.926641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.241 [2024-12-13 10:39:47.926662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.241 [2024-12-13 10:39:47.926671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.241 [2024-12-13 10:39:47.926850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.241 [2024-12-13 10:39:47.927032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.241 [2024-12-13 10:39:47.927042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.241 [2024-12-13 10:39:47.927051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.241 [2024-12-13 10:39:47.927059] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.241 [2024-12-13 10:39:47.939473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.241 [2024-12-13 10:39:47.939869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.241 [2024-12-13 10:39:47.939889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.241 [2024-12-13 10:39:47.939899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.241 [2024-12-13 10:39:47.940089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.241 [2024-12-13 10:39:47.940277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.241 [2024-12-13 10:39:47.940288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.241 [2024-12-13 10:39:47.940297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.241 [2024-12-13 10:39:47.940305] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.241 [2024-12-13 10:39:47.952516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.241 [2024-12-13 10:39:47.952973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.241 [2024-12-13 10:39:47.953030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.241 [2024-12-13 10:39:47.953061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.241 [2024-12-13 10:39:47.953614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.241 [2024-12-13 10:39:47.953803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.241 [2024-12-13 10:39:47.953814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.241 [2024-12-13 10:39:47.953822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.241 [2024-12-13 10:39:47.953831] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.241 [2024-12-13 10:39:47.965617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.241 [2024-12-13 10:39:47.966054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.241 [2024-12-13 10:39:47.966074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.241 [2024-12-13 10:39:47.966083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.241 [2024-12-13 10:39:47.966261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.241 [2024-12-13 10:39:47.966439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.241 [2024-12-13 10:39:47.966456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.241 [2024-12-13 10:39:47.966465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.241 [2024-12-13 10:39:47.966473] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.241 [2024-12-13 10:39:47.978654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.241 [2024-12-13 10:39:47.979098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.241 [2024-12-13 10:39:47.979161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.241 [2024-12-13 10:39:47.979193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.241 [2024-12-13 10:39:47.979720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.241 [2024-12-13 10:39:47.979902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.241 [2024-12-13 10:39:47.979912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.241 [2024-12-13 10:39:47.979920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.241 [2024-12-13 10:39:47.979928] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.241 [2024-12-13 10:39:47.991813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.241 [2024-12-13 10:39:47.992264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.241 [2024-12-13 10:39:47.992330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.241 [2024-12-13 10:39:47.992363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.241 [2024-12-13 10:39:47.992892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.241 [2024-12-13 10:39:47.993080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.241 [2024-12-13 10:39:47.993091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.241 [2024-12-13 10:39:47.993099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.241 [2024-12-13 10:39:47.993108] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.241 [2024-12-13 10:39:48.005121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.241 [2024-12-13 10:39:48.005584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.241 [2024-12-13 10:39:48.005606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.241 [2024-12-13 10:39:48.005616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.241 [2024-12-13 10:39:48.005804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.241 [2024-12-13 10:39:48.005992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.241 [2024-12-13 10:39:48.006003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.241 [2024-12-13 10:39:48.006011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.241 [2024-12-13 10:39:48.006020] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.241 [2024-12-13 10:39:48.018137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.241 [2024-12-13 10:39:48.018623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.241 [2024-12-13 10:39:48.018681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.241 [2024-12-13 10:39:48.018713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.241 [2024-12-13 10:39:48.019181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.241 [2024-12-13 10:39:48.019359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.241 [2024-12-13 10:39:48.019370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.241 [2024-12-13 10:39:48.019381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.241 [2024-12-13 10:39:48.019390] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.241 [2024-12-13 10:39:48.031202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.241 [2024-12-13 10:39:48.031662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.241 [2024-12-13 10:39:48.031721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.241 [2024-12-13 10:39:48.031753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.241 [2024-12-13 10:39:48.032124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.241 [2024-12-13 10:39:48.032302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.241 [2024-12-13 10:39:48.032314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.241 [2024-12-13 10:39:48.032323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.241 [2024-12-13 10:39:48.032331] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.241 [2024-12-13 10:39:48.044302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.241 [2024-12-13 10:39:48.044782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.241 [2024-12-13 10:39:48.044802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.241 [2024-12-13 10:39:48.044813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.242 [2024-12-13 10:39:48.045001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.242 [2024-12-13 10:39:48.045189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.242 [2024-12-13 10:39:48.045200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.242 [2024-12-13 10:39:48.045209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.242 [2024-12-13 10:39:48.045218] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.242 [2024-12-13 10:39:48.057468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.242 [2024-12-13 10:39:48.057912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.242 [2024-12-13 10:39:48.057933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.242 [2024-12-13 10:39:48.057943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.242 [2024-12-13 10:39:48.058132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.242 [2024-12-13 10:39:48.058319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.242 [2024-12-13 10:39:48.058330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.242 [2024-12-13 10:39:48.058339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.242 [2024-12-13 10:39:48.058348] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.242 [2024-12-13 10:39:48.070850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.242 [2024-12-13 10:39:48.071312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.242 [2024-12-13 10:39:48.071369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.242 [2024-12-13 10:39:48.071401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.242 [2024-12-13 10:39:48.072039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.242 [2024-12-13 10:39:48.072233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.242 [2024-12-13 10:39:48.072244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.242 [2024-12-13 10:39:48.072252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.242 [2024-12-13 10:39:48.072261] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.242 [2024-12-13 10:39:48.084107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.242 [2024-12-13 10:39:48.084610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.242 [2024-12-13 10:39:48.084669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.242 [2024-12-13 10:39:48.084701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.242 [2024-12-13 10:39:48.085197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.242 [2024-12-13 10:39:48.085384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.242 [2024-12-13 10:39:48.085395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.242 [2024-12-13 10:39:48.085403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.242 [2024-12-13 10:39:48.085412] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.242 [2024-12-13 10:39:48.097343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.242 [2024-12-13 10:39:48.097824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.242 [2024-12-13 10:39:48.097882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.242 [2024-12-13 10:39:48.097915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.242 [2024-12-13 10:39:48.098375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.242 [2024-12-13 10:39:48.098581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.242 [2024-12-13 10:39:48.098594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.242 [2024-12-13 10:39:48.098602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.242 [2024-12-13 10:39:48.098611] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.242 4120.00 IOPS, 16.09 MiB/s [2024-12-13T09:39:48.133Z] [2024-12-13 10:39:48.111515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.242 [2024-12-13 10:39:48.111879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.242 [2024-12-13 10:39:48.111903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.242 [2024-12-13 10:39:48.111920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.242 [2024-12-13 10:39:48.112109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.242 [2024-12-13 10:39:48.112297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.242 [2024-12-13 10:39:48.112308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.242 [2024-12-13 10:39:48.112317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.242 [2024-12-13 10:39:48.112325] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.242 [2024-12-13 10:39:48.124701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.242 [2024-12-13 10:39:48.125137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.242 [2024-12-13 10:39:48.125194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.242 [2024-12-13 10:39:48.125226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.242 [2024-12-13 10:39:48.125893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.242 [2024-12-13 10:39:48.126362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.242 [2024-12-13 10:39:48.126373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.242 [2024-12-13 10:39:48.126382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.242 [2024-12-13 10:39:48.126390] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
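The repeated "connect() failed, errno = 111" entries above are ECONNREFUSED: while the target side is down, nothing is listening on 10.0.0.2:4420, so every reconnect attempt from the host fails immediately and bdevperf throughput collapses. A minimal standalone sketch (not part of the test suite; the loopback address and port 4420 here are only for illustration) that reproduces the same errno by connecting to a port with no listener:

```python
#!/usr/bin/env python3
"""Illustration only: show errno 111 (ECONNREFUSED) from connect() when no
listener is present, matching the posix_sock_create errors in the log."""
import errno
import socket


def try_connect(addr: str, port: int) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        try:
            sock.connect((addr, port))
            print(f"connected to {addr}:{port}")
        except OSError as exc:
            # On Linux, ECONNREFUSED is errno 111 - the value seen in the log.
            name = errno.errorcode.get(exc.errno, "unknown")
            print(f"connect() failed, errno = {exc.errno} ({name})")


if __name__ == "__main__":
    # Assumes nothing is listening on this port on the local machine.
    try_connect("127.0.0.1", 4420)
```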
00:37:54.503 [2024-12-13 10:39:48.138097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.503 [2024-12-13 10:39:48.138480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.503 [2024-12-13 10:39:48.138501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.503 [2024-12-13 10:39:48.138511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.503 [2024-12-13 10:39:48.138705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.503 [2024-12-13 10:39:48.138882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.503 [2024-12-13 10:39:48.138892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.503 [2024-12-13 10:39:48.138900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.503 [2024-12-13 10:39:48.138908] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.503 [2024-12-13 10:39:48.151262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.503 [2024-12-13 10:39:48.151733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.503 [2024-12-13 10:39:48.151753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.503 [2024-12-13 10:39:48.151764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.503 [2024-12-13 10:39:48.151955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.503 [2024-12-13 10:39:48.152144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.503 [2024-12-13 10:39:48.152154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.503 [2024-12-13 10:39:48.152162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.503 [2024-12-13 10:39:48.152171] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.503 [2024-12-13 10:39:48.164294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.503 [2024-12-13 10:39:48.164763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.503 [2024-12-13 10:39:48.164784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.503 [2024-12-13 10:39:48.164794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.503 [2024-12-13 10:39:48.164983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.503 [2024-12-13 10:39:48.165171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.503 [2024-12-13 10:39:48.165182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.503 [2024-12-13 10:39:48.165190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.503 [2024-12-13 10:39:48.165199] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.503 [2024-12-13 10:39:48.177323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.503 [2024-12-13 10:39:48.177717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.504 [2024-12-13 10:39:48.177738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.504 [2024-12-13 10:39:48.177748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.504 [2024-12-13 10:39:48.177937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.504 [2024-12-13 10:39:48.178125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.504 [2024-12-13 10:39:48.178136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.504 [2024-12-13 10:39:48.178144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.504 [2024-12-13 10:39:48.178153] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.504 [2024-12-13 10:39:48.190472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.504 [2024-12-13 10:39:48.190856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.504 [2024-12-13 10:39:48.190877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.504 [2024-12-13 10:39:48.190887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.504 [2024-12-13 10:39:48.191075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.504 [2024-12-13 10:39:48.191262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.504 [2024-12-13 10:39:48.191277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.504 [2024-12-13 10:39:48.191286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.504 [2024-12-13 10:39:48.191294] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.504 [2024-12-13 10:39:48.203492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.504 [2024-12-13 10:39:48.203968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.504 [2024-12-13 10:39:48.203989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.504 [2024-12-13 10:39:48.203999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.504 [2024-12-13 10:39:48.204187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.504 [2024-12-13 10:39:48.204376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.504 [2024-12-13 10:39:48.204387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.504 [2024-12-13 10:39:48.204396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.504 [2024-12-13 10:39:48.204405] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.504 [2024-12-13 10:39:48.216541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.504 [2024-12-13 10:39:48.217004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.504 [2024-12-13 10:39:48.217061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.504 [2024-12-13 10:39:48.217093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.504 [2024-12-13 10:39:48.217562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.504 [2024-12-13 10:39:48.217741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.504 [2024-12-13 10:39:48.217751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.504 [2024-12-13 10:39:48.217759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.504 [2024-12-13 10:39:48.217768] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.504 [2024-12-13 10:39:48.229637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.504 [2024-12-13 10:39:48.230090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.504 [2024-12-13 10:39:48.230109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.504 [2024-12-13 10:39:48.230118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.504 [2024-12-13 10:39:48.230296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.504 [2024-12-13 10:39:48.230480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.504 [2024-12-13 10:39:48.230491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.504 [2024-12-13 10:39:48.230500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.504 [2024-12-13 10:39:48.230511] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.504 [2024-12-13 10:39:48.242847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.504 [2024-12-13 10:39:48.243293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.504 [2024-12-13 10:39:48.243312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.504 [2024-12-13 10:39:48.243322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.504 [2024-12-13 10:39:48.243522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.504 [2024-12-13 10:39:48.243711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.504 [2024-12-13 10:39:48.243722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.504 [2024-12-13 10:39:48.243730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.504 [2024-12-13 10:39:48.243739] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 4153162 Killed "${NVMF_APP[@]}" "$@" 00:37:54.504 [2024-12-13 10:39:48.256006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.504 10:39:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:37:54.504 [2024-12-13 10:39:48.256488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.504 [2024-12-13 10:39:48.256510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.504 [2024-12-13 10:39:48.256522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.504 10:39:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:54.504 [2024-12-13 10:39:48.256716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.504 [2024-12-13 10:39:48.256919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.504 10:39:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:54.504 [2024-12-13 10:39:48.256933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.504 [2024-12-13 10:39:48.256942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.504 [2024-12-13 10:39:48.256950] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.504 10:39:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:54.504 10:39:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:54.504 10:39:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=4154753 00:37:54.504 10:39:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:54.504 10:39:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 4154753 00:37:54.504 10:39:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 4154753 ']' 00:37:54.504 10:39:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:54.504 10:39:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:54.504 10:39:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:54.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:54.504 10:39:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:54.504 10:39:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:54.504 [2024-12-13 10:39:48.269330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.504 [2024-12-13 10:39:48.269733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.504 [2024-12-13 10:39:48.269754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.504 [2024-12-13 10:39:48.269764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.504 [2024-12-13 10:39:48.269958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.504 [2024-12-13 10:39:48.270151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.504 [2024-12-13 10:39:48.270162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.504 [2024-12-13 10:39:48.270171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.504 [2024-12-13 10:39:48.270180] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
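The trace above shows the script relaunching nvmf_tgt (nvmfappstart -m 0xE) and then blocking in waitforlisten until the new process answers on its RPC socket, /var/tmp/spdk.sock, before any further RPCs are issued. A rough Python analogue of that wait loop (an illustration only, not the actual shell helper; the socket path is taken from the trace):

```python
#!/usr/bin/env python3
"""Sketch of a wait-for-listen step: poll a UNIX domain socket until the
server accepts connections or a timeout expires."""
import socket
import time


def wait_for_listen(path: str, timeout: float = 30.0, interval: float = 0.5) -> bool:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            try:
                sock.connect(path)
                return True           # server is up and accepting connections
            except OSError:
                time.sleep(interval)  # not listening yet; retry until deadline
    return False


if __name__ == "__main__":
    ok = wait_for_listen("/var/tmp/spdk.sock")
    print("listening" if ok else "timed out waiting for listener")
```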
00:37:54.504 [2024-12-13 10:39:48.282738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.504 [2024-12-13 10:39:48.283213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.504 [2024-12-13 10:39:48.283234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.504 [2024-12-13 10:39:48.283244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.504 [2024-12-13 10:39:48.283438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.504 [2024-12-13 10:39:48.283637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.504 [2024-12-13 10:39:48.283649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.505 [2024-12-13 10:39:48.283658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.505 [2024-12-13 10:39:48.283667] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.505 [2024-12-13 10:39:48.296089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.505 [2024-12-13 10:39:48.296525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.505 [2024-12-13 10:39:48.296546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.505 [2024-12-13 10:39:48.296557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.505 [2024-12-13 10:39:48.296759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.505 [2024-12-13 10:39:48.296954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.505 [2024-12-13 10:39:48.296965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.505 [2024-12-13 10:39:48.296974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.505 [2024-12-13 10:39:48.296985] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.505 [2024-12-13 10:39:48.309300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.505 [2024-12-13 10:39:48.309797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.505 [2024-12-13 10:39:48.309820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.505 [2024-12-13 10:39:48.309831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.505 [2024-12-13 10:39:48.310029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.505 [2024-12-13 10:39:48.310226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.505 [2024-12-13 10:39:48.310238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.505 [2024-12-13 10:39:48.310247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.505 [2024-12-13 10:39:48.310257] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.505 [2024-12-13 10:39:48.322729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.505 [2024-12-13 10:39:48.323200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.505 [2024-12-13 10:39:48.323221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.505 [2024-12-13 10:39:48.323231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.505 [2024-12-13 10:39:48.323429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.505 [2024-12-13 10:39:48.323631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.505 [2024-12-13 10:39:48.323643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.505 [2024-12-13 10:39:48.323652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.505 [2024-12-13 10:39:48.323662] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.505 [2024-12-13 10:39:48.336084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.505 [2024-12-13 10:39:48.336558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.505 [2024-12-13 10:39:48.336581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.505 [2024-12-13 10:39:48.336593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.505 [2024-12-13 10:39:48.336792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.505 [2024-12-13 10:39:48.336989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.505 [2024-12-13 10:39:48.337000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.505 [2024-12-13 10:39:48.337010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.505 [2024-12-13 10:39:48.337019] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.505 [2024-12-13 10:39:48.343396] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:54.505 [2024-12-13 10:39:48.343480] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:54.505 [2024-12-13 10:39:48.349495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.505 [2024-12-13 10:39:48.349996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.505 [2024-12-13 10:39:48.350019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.505 [2024-12-13 10:39:48.350030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.505 [2024-12-13 10:39:48.350228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.505 [2024-12-13 10:39:48.350426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.505 [2024-12-13 10:39:48.350438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.505 [2024-12-13 10:39:48.350452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.505 [2024-12-13 10:39:48.350463] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.505 [2024-12-13 10:39:48.362877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.505 [2024-12-13 10:39:48.363347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.505 [2024-12-13 10:39:48.363369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.505 [2024-12-13 10:39:48.363381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.505 [2024-12-13 10:39:48.363586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.505 [2024-12-13 10:39:48.363786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.505 [2024-12-13 10:39:48.363797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.505 [2024-12-13 10:39:48.363807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.505 [2024-12-13 10:39:48.363817] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.505 [2024-12-13 10:39:48.376328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.505 [2024-12-13 10:39:48.376789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.505 [2024-12-13 10:39:48.376811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.505 [2024-12-13 10:39:48.376822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.505 [2024-12-13 10:39:48.377019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.505 [2024-12-13 10:39:48.377218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.505 [2024-12-13 10:39:48.377229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.505 [2024-12-13 10:39:48.377239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.505 [2024-12-13 10:39:48.377249] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.505 [2024-12-13 10:39:48.389743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.505 [2024-12-13 10:39:48.390232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.505 [2024-12-13 10:39:48.390254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.505 [2024-12-13 10:39:48.390265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.505 [2024-12-13 10:39:48.390470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.505 [2024-12-13 10:39:48.390669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.505 [2024-12-13 10:39:48.390680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.505 [2024-12-13 10:39:48.390690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.505 [2024-12-13 10:39:48.390700] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.766 [2024-12-13 10:39:48.403151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.766 [2024-12-13 10:39:48.403640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.766 [2024-12-13 10:39:48.403663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.766 [2024-12-13 10:39:48.403674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.766 [2024-12-13 10:39:48.403873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.766 [2024-12-13 10:39:48.404072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.766 [2024-12-13 10:39:48.404084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.766 [2024-12-13 10:39:48.404093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.766 [2024-12-13 10:39:48.404103] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.766 [2024-12-13 10:39:48.416632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.766 [2024-12-13 10:39:48.417035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.766 [2024-12-13 10:39:48.417057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.766 [2024-12-13 10:39:48.417068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.766 [2024-12-13 10:39:48.417266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.766 [2024-12-13 10:39:48.417467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.766 [2024-12-13 10:39:48.417479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.766 [2024-12-13 10:39:48.417489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.766 [2024-12-13 10:39:48.417498] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.766 [2024-12-13 10:39:48.430045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.766 [2024-12-13 10:39:48.430530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.766 [2024-12-13 10:39:48.430552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.766 [2024-12-13 10:39:48.430566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.766 [2024-12-13 10:39:48.430764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.766 [2024-12-13 10:39:48.430961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.766 [2024-12-13 10:39:48.430972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.766 [2024-12-13 10:39:48.430981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.766 [2024-12-13 10:39:48.430991] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.766 [2024-12-13 10:39:48.443339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.766 [2024-12-13 10:39:48.443829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.767 [2024-12-13 10:39:48.443850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.767 [2024-12-13 10:39:48.443861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.767 [2024-12-13 10:39:48.444058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.767 [2024-12-13 10:39:48.444256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.767 [2024-12-13 10:39:48.444268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.767 [2024-12-13 10:39:48.444277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.767 [2024-12-13 10:39:48.444286] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.767 [2024-12-13 10:39:48.456687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.767 [2024-12-13 10:39:48.457138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.767 [2024-12-13 10:39:48.457160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.767 [2024-12-13 10:39:48.457171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.767 [2024-12-13 10:39:48.457367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.767 [2024-12-13 10:39:48.457570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.767 [2024-12-13 10:39:48.457581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.767 [2024-12-13 10:39:48.457591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.767 [2024-12-13 10:39:48.457600] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.767 [2024-12-13 10:39:48.467123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:54.767 [2024-12-13 10:39:48.470059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.767 [2024-12-13 10:39:48.470540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.767 [2024-12-13 10:39:48.470563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.767 [2024-12-13 10:39:48.470574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.767 [2024-12-13 10:39:48.470773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.767 [2024-12-13 10:39:48.470971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.767 [2024-12-13 10:39:48.470982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.767 [2024-12-13 10:39:48.470991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.767 [2024-12-13 10:39:48.471000] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.767 [2024-12-13 10:39:48.483333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.767 [2024-12-13 10:39:48.483802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.767 [2024-12-13 10:39:48.483824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.767 [2024-12-13 10:39:48.483835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.767 [2024-12-13 10:39:48.484028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.767 [2024-12-13 10:39:48.484220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.767 [2024-12-13 10:39:48.484231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.767 [2024-12-13 10:39:48.484250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.767 [2024-12-13 10:39:48.484259] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.767 [2024-12-13 10:39:48.496726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.767 [2024-12-13 10:39:48.497214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.767 [2024-12-13 10:39:48.497238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.767 [2024-12-13 10:39:48.497249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.767 [2024-12-13 10:39:48.497454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.767 [2024-12-13 10:39:48.497652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.767 [2024-12-13 10:39:48.497664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.767 [2024-12-13 10:39:48.497674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.767 [2024-12-13 10:39:48.497683] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.767 [2024-12-13 10:39:48.509993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.767 [2024-12-13 10:39:48.510459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.767 [2024-12-13 10:39:48.510481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.767 [2024-12-13 10:39:48.510492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.767 [2024-12-13 10:39:48.510685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.767 [2024-12-13 10:39:48.510875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.767 [2024-12-13 10:39:48.510889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.767 [2024-12-13 10:39:48.510898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.767 [2024-12-13 10:39:48.510907] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.767 [2024-12-13 10:39:48.523338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.767 [2024-12-13 10:39:48.523816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.767 [2024-12-13 10:39:48.523837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.767 [2024-12-13 10:39:48.523848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.767 [2024-12-13 10:39:48.524040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.767 [2024-12-13 10:39:48.524232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.767 [2024-12-13 10:39:48.524243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.767 [2024-12-13 10:39:48.524251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.767 [2024-12-13 10:39:48.524261] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.767 [2024-12-13 10:39:48.536693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.767 [2024-12-13 10:39:48.537156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.767 [2024-12-13 10:39:48.537177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.767 [2024-12-13 10:39:48.537188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.767 [2024-12-13 10:39:48.537385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.767 [2024-12-13 10:39:48.537591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.767 [2024-12-13 10:39:48.537603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.767 [2024-12-13 10:39:48.537612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.767 [2024-12-13 10:39:48.537621] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.767 [2024-12-13 10:39:48.549915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.767 [2024-12-13 10:39:48.550379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.767 [2024-12-13 10:39:48.550400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.767 [2024-12-13 10:39:48.550410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.767 [2024-12-13 10:39:48.550609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.767 [2024-12-13 10:39:48.550801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.767 [2024-12-13 10:39:48.550812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.767 [2024-12-13 10:39:48.550824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.767 [2024-12-13 10:39:48.550833] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.767 [2024-12-13 10:39:48.563271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.767 [2024-12-13 10:39:48.563674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.767 [2024-12-13 10:39:48.563696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.767 [2024-12-13 10:39:48.563707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.767 [2024-12-13 10:39:48.563903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.767 [2024-12-13 10:39:48.564100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.767 [2024-12-13 10:39:48.564111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.767 [2024-12-13 10:39:48.564119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.767 [2024-12-13 10:39:48.564129] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.767 [2024-12-13 10:39:48.576651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.767 [2024-12-13 10:39:48.577092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.768 [2024-12-13 10:39:48.577113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.768 [2024-12-13 10:39:48.577123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.768 [2024-12-13 10:39:48.577320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.768 [2024-12-13 10:39:48.577522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.768 [2024-12-13 10:39:48.577535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.768 [2024-12-13 10:39:48.577545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.768 [2024-12-13 10:39:48.577554] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.768 [2024-12-13 10:39:48.578007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:54.768 [2024-12-13 10:39:48.578038] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:54.768 [2024-12-13 10:39:48.578049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:54.768 [2024-12-13 10:39:48.578060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:54.768 [2024-12-13 10:39:48.578068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
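The app_setup_trace notices above state how to pull the tracepoint data while the target is running. A short sketch using only the command and path quoted in the notices; the destination filename for the offline copy is an assumption:

  # capture a snapshot of the nvmf tracepoints from the running app (instance id 0),
  # exactly as suggested by the notice above
  spdk_trace -s nvmf -i 0
  # or keep the raw shared-memory trace buffer for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.$(date +%s)   # destination path is illustrative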
00:37:54.768 [2024-12-13 10:39:48.580205] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:37:54.768 [2024-12-13 10:39:48.580274] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:54.768 [2024-12-13 10:39:48.580282] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:37:54.768 [2024-12-13 10:39:48.590061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.768 [2024-12-13 10:39:48.590541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.768 [2024-12-13 10:39:48.590566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.768 [2024-12-13 10:39:48.590583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.768 [2024-12-13 10:39:48.590783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.768 [2024-12-13 10:39:48.590984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.768 [2024-12-13 10:39:48.590995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.768 [2024-12-13 10:39:48.591005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.768 [2024-12-13 10:39:48.591014] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.768 [2024-12-13 10:39:48.603514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.768 [2024-12-13 10:39:48.603975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.768 [2024-12-13 10:39:48.603997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.768 [2024-12-13 10:39:48.604008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.768 [2024-12-13 10:39:48.604206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.768 [2024-12-13 10:39:48.604403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.768 [2024-12-13 10:39:48.604414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.768 [2024-12-13 10:39:48.604424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.768 [2024-12-13 10:39:48.604434] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
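The three reactor lines above are consistent with the earlier "Total cores available: 3" notice: the event framework started one reactor per core on cores 1, 2 and 3. The actual command line is not shown in this excerpt; as an assumption, a core mask of 0xE passed via the standard SPDK -m option would request exactly that layout:

  # hypothetical launch producing reactors on cores 1-3 (mask 0b1110 = 0xE)
  nvmf_tgt -m 0xE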
00:37:54.768 [2024-12-13 10:39:48.616982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.768 [2024-12-13 10:39:48.617459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.768 [2024-12-13 10:39:48.617482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.768 [2024-12-13 10:39:48.617494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.768 [2024-12-13 10:39:48.617692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.768 [2024-12-13 10:39:48.617891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.768 [2024-12-13 10:39:48.617902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.768 [2024-12-13 10:39:48.617912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.768 [2024-12-13 10:39:48.617921] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:54.768 [2024-12-13 10:39:48.630390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.768 [2024-12-13 10:39:48.630761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.768 [2024-12-13 10:39:48.630784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.768 [2024-12-13 10:39:48.630795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.768 [2024-12-13 10:39:48.630993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.768 [2024-12-13 10:39:48.631192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.768 [2024-12-13 10:39:48.631204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.768 [2024-12-13 10:39:48.631213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.768 [2024-12-13 10:39:48.631223] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:54.768 [2024-12-13 10:39:48.643864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:54.768 [2024-12-13 10:39:48.644333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:54.768 [2024-12-13 10:39:48.644355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:54.768 [2024-12-13 10:39:48.644365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:54.768 [2024-12-13 10:39:48.644570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:54.768 [2024-12-13 10:39:48.644768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:54.768 [2024-12-13 10:39:48.644780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:54.768 [2024-12-13 10:39:48.644789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:54.768 [2024-12-13 10:39:48.644798] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.029 [2024-12-13 10:39:48.657292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.029 [2024-12-13 10:39:48.657753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.029 [2024-12-13 10:39:48.657778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.029 [2024-12-13 10:39:48.657790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.029 [2024-12-13 10:39:48.657989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.029 [2024-12-13 10:39:48.658187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.029 [2024-12-13 10:39:48.658199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.029 [2024-12-13 10:39:48.658208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.029 [2024-12-13 10:39:48.658218] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.029 [2024-12-13 10:39:48.670838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.029 [2024-12-13 10:39:48.671269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.029 [2024-12-13 10:39:48.671296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.029 [2024-12-13 10:39:48.671308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.029 [2024-12-13 10:39:48.671515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.029 [2024-12-13 10:39:48.671716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.029 [2024-12-13 10:39:48.671727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.029 [2024-12-13 10:39:48.671742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.029 [2024-12-13 10:39:48.671753] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.029 [2024-12-13 10:39:48.684265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.029 [2024-12-13 10:39:48.684687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.029 [2024-12-13 10:39:48.684710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.029 [2024-12-13 10:39:48.684721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.029 [2024-12-13 10:39:48.684920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.029 [2024-12-13 10:39:48.685118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.029 [2024-12-13 10:39:48.685129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.029 [2024-12-13 10:39:48.685139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.029 [2024-12-13 10:39:48.685148] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.029 [2024-12-13 10:39:48.697642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.029 [2024-12-13 10:39:48.698020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.029 [2024-12-13 10:39:48.698041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.029 [2024-12-13 10:39:48.698052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.029 [2024-12-13 10:39:48.698249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.029 [2024-12-13 10:39:48.698456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.029 [2024-12-13 10:39:48.698469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.029 [2024-12-13 10:39:48.698479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.029 [2024-12-13 10:39:48.698488] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.029 [2024-12-13 10:39:48.710976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.029 [2024-12-13 10:39:48.711322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.029 [2024-12-13 10:39:48.711343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.029 [2024-12-13 10:39:48.711354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.029 [2024-12-13 10:39:48.711558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.029 [2024-12-13 10:39:48.711757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.029 [2024-12-13 10:39:48.711768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.029 [2024-12-13 10:39:48.711779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.029 [2024-12-13 10:39:48.711788] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.029 [2024-12-13 10:39:48.724423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.029 [2024-12-13 10:39:48.724809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.030 [2024-12-13 10:39:48.724831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.030 [2024-12-13 10:39:48.724842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.030 [2024-12-13 10:39:48.725039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.030 [2024-12-13 10:39:48.725233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.030 [2024-12-13 10:39:48.725244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.030 [2024-12-13 10:39:48.725253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.030 [2024-12-13 10:39:48.725262] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.030 [2024-12-13 10:39:48.737857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.030 [2024-12-13 10:39:48.738261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.030 [2024-12-13 10:39:48.738283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.030 [2024-12-13 10:39:48.738293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.030 [2024-12-13 10:39:48.738496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.030 [2024-12-13 10:39:48.738693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.030 [2024-12-13 10:39:48.738704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.030 [2024-12-13 10:39:48.738713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.030 [2024-12-13 10:39:48.738722] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.030 [2024-12-13 10:39:48.751169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.030 [2024-12-13 10:39:48.751621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.030 [2024-12-13 10:39:48.751643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.030 [2024-12-13 10:39:48.751654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.030 [2024-12-13 10:39:48.751851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.030 [2024-12-13 10:39:48.752047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.030 [2024-12-13 10:39:48.752058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.030 [2024-12-13 10:39:48.752067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.030 [2024-12-13 10:39:48.752076] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.030 [2024-12-13 10:39:48.764535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.030 [2024-12-13 10:39:48.764902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.030 [2024-12-13 10:39:48.764930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.030 [2024-12-13 10:39:48.764940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.030 [2024-12-13 10:39:48.765141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.030 [2024-12-13 10:39:48.765336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.030 [2024-12-13 10:39:48.765347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.030 [2024-12-13 10:39:48.765356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.030 [2024-12-13 10:39:48.765365] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.030 [2024-12-13 10:39:48.777975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.030 [2024-12-13 10:39:48.778479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.030 [2024-12-13 10:39:48.778501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.030 [2024-12-13 10:39:48.778511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.030 [2024-12-13 10:39:48.778707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.030 [2024-12-13 10:39:48.778902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.030 [2024-12-13 10:39:48.778913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.030 [2024-12-13 10:39:48.778922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.030 [2024-12-13 10:39:48.778931] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.030 [2024-12-13 10:39:48.791347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.030 [2024-12-13 10:39:48.791750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.030 [2024-12-13 10:39:48.791772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.030 [2024-12-13 10:39:48.791783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.030 [2024-12-13 10:39:48.791978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.030 [2024-12-13 10:39:48.792173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.030 [2024-12-13 10:39:48.792184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.030 [2024-12-13 10:39:48.792193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.030 [2024-12-13 10:39:48.792202] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.030 [2024-12-13 10:39:48.804673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.030 [2024-12-13 10:39:48.805184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.030 [2024-12-13 10:39:48.805209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.030 [2024-12-13 10:39:48.805222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.030 [2024-12-13 10:39:48.805427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.030 [2024-12-13 10:39:48.805632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.030 [2024-12-13 10:39:48.805645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.030 [2024-12-13 10:39:48.805655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.030 [2024-12-13 10:39:48.805664] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.030 [2024-12-13 10:39:48.818029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.030 [2024-12-13 10:39:48.818455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.030 [2024-12-13 10:39:48.818479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.030 [2024-12-13 10:39:48.818491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.030 [2024-12-13 10:39:48.818689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.030 [2024-12-13 10:39:48.818888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.030 [2024-12-13 10:39:48.818901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.030 [2024-12-13 10:39:48.818911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.030 [2024-12-13 10:39:48.818921] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.030 [2024-12-13 10:39:48.831394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.030 [2024-12-13 10:39:48.831725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.030 [2024-12-13 10:39:48.831747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.030 [2024-12-13 10:39:48.831759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.030 [2024-12-13 10:39:48.831955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.030 [2024-12-13 10:39:48.832152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.030 [2024-12-13 10:39:48.832164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.030 [2024-12-13 10:39:48.832173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.030 [2024-12-13 10:39:48.832183] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.030 [2024-12-13 10:39:48.844849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.030 [2024-12-13 10:39:48.845309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.030 [2024-12-13 10:39:48.845331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.030 [2024-12-13 10:39:48.845343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.030 [2024-12-13 10:39:48.845545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.030 [2024-12-13 10:39:48.845744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.030 [2024-12-13 10:39:48.845759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.030 [2024-12-13 10:39:48.845769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.030 [2024-12-13 10:39:48.845778] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.030 [2024-12-13 10:39:48.858207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.030 [2024-12-13 10:39:48.858587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.031 [2024-12-13 10:39:48.858609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.031 [2024-12-13 10:39:48.858620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.031 [2024-12-13 10:39:48.858815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.031 [2024-12-13 10:39:48.859010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.031 [2024-12-13 10:39:48.859022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.031 [2024-12-13 10:39:48.859031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.031 [2024-12-13 10:39:48.859040] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.031 [2024-12-13 10:39:48.871657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.031 [2024-12-13 10:39:48.872079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.031 [2024-12-13 10:39:48.872129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.031 [2024-12-13 10:39:48.872140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.031 [2024-12-13 10:39:48.872338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.031 [2024-12-13 10:39:48.872541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.031 [2024-12-13 10:39:48.872553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.031 [2024-12-13 10:39:48.872563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.031 [2024-12-13 10:39:48.872572] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.031 [2024-12-13 10:39:48.884995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.031 [2024-12-13 10:39:48.885389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.031 [2024-12-13 10:39:48.885410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.031 [2024-12-13 10:39:48.885421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.031 [2024-12-13 10:39:48.885622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.031 [2024-12-13 10:39:48.885817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.031 [2024-12-13 10:39:48.885829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.031 [2024-12-13 10:39:48.885841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.031 [2024-12-13 10:39:48.885851] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.031 [2024-12-13 10:39:48.898433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.031 [2024-12-13 10:39:48.898765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.031 [2024-12-13 10:39:48.898786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.031 [2024-12-13 10:39:48.898797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.031 [2024-12-13 10:39:48.898993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.031 [2024-12-13 10:39:48.899189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.031 [2024-12-13 10:39:48.899201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.031 [2024-12-13 10:39:48.899210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.031 [2024-12-13 10:39:48.899219] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.031 [2024-12-13 10:39:48.911849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.031 [2024-12-13 10:39:48.912169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.031 [2024-12-13 10:39:48.912189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.031 [2024-12-13 10:39:48.912201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.031 [2024-12-13 10:39:48.912396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.031 [2024-12-13 10:39:48.912598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.031 [2024-12-13 10:39:48.912609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.031 [2024-12-13 10:39:48.912619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.031 [2024-12-13 10:39:48.912628] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.291 [2024-12-13 10:39:48.925243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.291 [2024-12-13 10:39:48.925645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.291 [2024-12-13 10:39:48.925668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.291 [2024-12-13 10:39:48.925679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.291 [2024-12-13 10:39:48.925876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.291 [2024-12-13 10:39:48.926071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.291 [2024-12-13 10:39:48.926082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.291 [2024-12-13 10:39:48.926091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.291 [2024-12-13 10:39:48.926101] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.291 [2024-12-13 10:39:48.938726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.291 [2024-12-13 10:39:48.939122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.291 [2024-12-13 10:39:48.939143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.291 [2024-12-13 10:39:48.939154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.291 [2024-12-13 10:39:48.939348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.291 [2024-12-13 10:39:48.939550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.291 [2024-12-13 10:39:48.939562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.291 [2024-12-13 10:39:48.939571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.291 [2024-12-13 10:39:48.939580] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.292 [2024-12-13 10:39:48.952032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.292 [2024-12-13 10:39:48.952483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.292 [2024-12-13 10:39:48.952506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.292 [2024-12-13 10:39:48.952516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.292 [2024-12-13 10:39:48.952711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.292 [2024-12-13 10:39:48.952908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.292 [2024-12-13 10:39:48.952920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.292 [2024-12-13 10:39:48.952929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.292 [2024-12-13 10:39:48.952938] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.292 [2024-12-13 10:39:48.965350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.292 [2024-12-13 10:39:48.965727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.292 [2024-12-13 10:39:48.965748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.292 [2024-12-13 10:39:48.965759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.292 [2024-12-13 10:39:48.965955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.292 [2024-12-13 10:39:48.966152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.292 [2024-12-13 10:39:48.966163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.292 [2024-12-13 10:39:48.966172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.292 [2024-12-13 10:39:48.966182] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.292 [2024-12-13 10:39:48.978785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.292 [2024-12-13 10:39:48.979156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.292 [2024-12-13 10:39:48.979177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.292 [2024-12-13 10:39:48.979192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.292 [2024-12-13 10:39:48.979389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.292 [2024-12-13 10:39:48.979590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.292 [2024-12-13 10:39:48.979602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.292 [2024-12-13 10:39:48.979612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.292 [2024-12-13 10:39:48.979621] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.292 [2024-12-13 10:39:48.992246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.292 [2024-12-13 10:39:48.992658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.292 [2024-12-13 10:39:48.992679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.292 [2024-12-13 10:39:48.992691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.292 [2024-12-13 10:39:48.992886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.292 [2024-12-13 10:39:48.993080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.292 [2024-12-13 10:39:48.993092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.292 [2024-12-13 10:39:48.993101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.292 [2024-12-13 10:39:48.993111] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.292 [2024-12-13 10:39:49.005722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.292 [2024-12-13 10:39:49.006165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.292 [2024-12-13 10:39:49.006186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.292 [2024-12-13 10:39:49.006197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.292 [2024-12-13 10:39:49.006391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.292 [2024-12-13 10:39:49.006593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.292 [2024-12-13 10:39:49.006605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.292 [2024-12-13 10:39:49.006614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.292 [2024-12-13 10:39:49.006623] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.292 [2024-12-13 10:39:49.019056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.292 [2024-12-13 10:39:49.019467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.292 [2024-12-13 10:39:49.019491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.292 [2024-12-13 10:39:49.019502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.292 [2024-12-13 10:39:49.019699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.292 [2024-12-13 10:39:49.019895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.292 [2024-12-13 10:39:49.019906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.292 [2024-12-13 10:39:49.019916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.292 [2024-12-13 10:39:49.019925] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.292 [2024-12-13 10:39:49.032381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.292 [2024-12-13 10:39:49.032765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.292 [2024-12-13 10:39:49.032787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.292 [2024-12-13 10:39:49.032798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.292 [2024-12-13 10:39:49.032994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.292 [2024-12-13 10:39:49.033189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.292 [2024-12-13 10:39:49.033200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.292 [2024-12-13 10:39:49.033209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.292 [2024-12-13 10:39:49.033218] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.292 [2024-12-13 10:39:49.045827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.292 [2024-12-13 10:39:49.046215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.292 [2024-12-13 10:39:49.046236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.292 [2024-12-13 10:39:49.046246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.292 [2024-12-13 10:39:49.046440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.292 [2024-12-13 10:39:49.046643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.292 [2024-12-13 10:39:49.046656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.292 [2024-12-13 10:39:49.046666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.292 [2024-12-13 10:39:49.046675] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.292 [2024-12-13 10:39:49.059266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.292 [2024-12-13 10:39:49.059670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.292 [2024-12-13 10:39:49.059692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.292 [2024-12-13 10:39:49.059703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.292 [2024-12-13 10:39:49.059905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.292 [2024-12-13 10:39:49.060100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.292 [2024-12-13 10:39:49.060114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.292 [2024-12-13 10:39:49.060124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.292 [2024-12-13 10:39:49.060132] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.292 [2024-12-13 10:39:49.072564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.292 [2024-12-13 10:39:49.073033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.292 [2024-12-13 10:39:49.073054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.292 [2024-12-13 10:39:49.073065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.292 [2024-12-13 10:39:49.073260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.292 [2024-12-13 10:39:49.073460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.292 [2024-12-13 10:39:49.073473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.292 [2024-12-13 10:39:49.073482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.292 [2024-12-13 10:39:49.073491] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.293 [2024-12-13 10:39:49.085901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.293 [2024-12-13 10:39:49.086363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.293 [2024-12-13 10:39:49.086385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.293 [2024-12-13 10:39:49.086396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.293 [2024-12-13 10:39:49.086596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.293 [2024-12-13 10:39:49.086792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.293 [2024-12-13 10:39:49.086805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.293 [2024-12-13 10:39:49.086815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.293 [2024-12-13 10:39:49.086824] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.293 [2024-12-13 10:39:49.099219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.293 [2024-12-13 10:39:49.099694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.293 [2024-12-13 10:39:49.099716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.293 [2024-12-13 10:39:49.099727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.293 [2024-12-13 10:39:49.099922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.293 [2024-12-13 10:39:49.100117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.293 [2024-12-13 10:39:49.100129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.293 [2024-12-13 10:39:49.100138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.293 [2024-12-13 10:39:49.100154] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.293 3433.33 IOPS, 13.41 MiB/s [2024-12-13T09:39:49.184Z] [2024-12-13 10:39:49.113933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.293 [2024-12-13 10:39:49.114400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.293 [2024-12-13 10:39:49.114421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.293 [2024-12-13 10:39:49.114432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.293 [2024-12-13 10:39:49.114633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.293 [2024-12-13 10:39:49.114829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.293 [2024-12-13 10:39:49.114841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.293 [2024-12-13 10:39:49.114851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.293 [2024-12-13 10:39:49.114860] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.293 [2024-12-13 10:39:49.127264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.293 [2024-12-13 10:39:49.127627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.293 [2024-12-13 10:39:49.127648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.293 [2024-12-13 10:39:49.127659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.293 [2024-12-13 10:39:49.127853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.293 [2024-12-13 10:39:49.128047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.293 [2024-12-13 10:39:49.128059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.293 [2024-12-13 10:39:49.128068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.293 [2024-12-13 10:39:49.128077] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.293 [2024-12-13 10:39:49.140655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.293 [2024-12-13 10:39:49.141080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.293 [2024-12-13 10:39:49.141101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.293 [2024-12-13 10:39:49.141112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.293 [2024-12-13 10:39:49.141307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.293 [2024-12-13 10:39:49.141515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.293 [2024-12-13 10:39:49.141527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.293 [2024-12-13 10:39:49.141537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.293 [2024-12-13 10:39:49.141546] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.293 [2024-12-13 10:39:49.153957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.293 [2024-12-13 10:39:49.154438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.293 [2024-12-13 10:39:49.154463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.293 [2024-12-13 10:39:49.154475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.293 [2024-12-13 10:39:49.154669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.293 [2024-12-13 10:39:49.154865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.293 [2024-12-13 10:39:49.154877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.293 [2024-12-13 10:39:49.154887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.293 [2024-12-13 10:39:49.154896] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.293 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:55.293 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:55.293 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:55.293 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:55.293 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:55.293 [2024-12-13 10:39:49.167312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.293 [2024-12-13 10:39:49.167767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.293 [2024-12-13 10:39:49.167789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.293 [2024-12-13 10:39:49.167799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.293 [2024-12-13 10:39:49.167995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.293 [2024-12-13 10:39:49.168189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.293 [2024-12-13 10:39:49.168203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.293 [2024-12-13 10:39:49.168214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.293 [2024-12-13 10:39:49.168224] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.293 [2024-12-13 10:39:49.180650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.293 [2024-12-13 10:39:49.180976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.293 [2024-12-13 10:39:49.180997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.293 [2024-12-13 10:39:49.181008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.293 [2024-12-13 10:39:49.181203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.293 [2024-12-13 10:39:49.181399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.293 [2024-12-13 10:39:49.181411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.293 [2024-12-13 10:39:49.181420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.293 [2024-12-13 10:39:49.181434] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.553 [2024-12-13 10:39:49.194059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.553 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:55.553 [2024-12-13 10:39:49.194480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.553 [2024-12-13 10:39:49.194505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.553 [2024-12-13 10:39:49.194516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.553 [2024-12-13 10:39:49.194710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.553 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:55.553 [2024-12-13 10:39:49.194907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.553 [2024-12-13 10:39:49.194920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.553 [2024-12-13 10:39:49.194929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.553 [2024-12-13 10:39:49.194938] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.553 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.553 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:55.553 [2024-12-13 10:39:49.201911] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:55.553 [2024-12-13 10:39:49.207340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.553 [2024-12-13 10:39:49.207678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.554 [2024-12-13 10:39:49.207700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.554 [2024-12-13 10:39:49.207711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.554 [2024-12-13 10:39:49.207906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.554 [2024-12-13 10:39:49.208100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.554 [2024-12-13 10:39:49.208111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.554 [2024-12-13 10:39:49.208119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.554 [2024-12-13 10:39:49.208128] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.554 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.554 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:55.554 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.554 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:55.554 [2024-12-13 10:39:49.220733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.554 [2024-12-13 10:39:49.221106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.554 [2024-12-13 10:39:49.221127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.554 [2024-12-13 10:39:49.221137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.554 [2024-12-13 10:39:49.221337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.554 [2024-12-13 10:39:49.221538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.554 [2024-12-13 10:39:49.221550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.554 [2024-12-13 10:39:49.221559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.554 [2024-12-13 10:39:49.221568] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.554 [2024-12-13 10:39:49.234013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.554 [2024-12-13 10:39:49.234464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.554 [2024-12-13 10:39:49.234486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.554 [2024-12-13 10:39:49.234497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.554 [2024-12-13 10:39:49.234693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.554 [2024-12-13 10:39:49.234890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.554 [2024-12-13 10:39:49.234901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.554 [2024-12-13 10:39:49.234910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.554 [2024-12-13 10:39:49.234919] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.554 [2024-12-13 10:39:49.247380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.554 [2024-12-13 10:39:49.247848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.554 [2024-12-13 10:39:49.247871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.554 [2024-12-13 10:39:49.247883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.554 [2024-12-13 10:39:49.248083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.554 [2024-12-13 10:39:49.248282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.554 [2024-12-13 10:39:49.248301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.554 [2024-12-13 10:39:49.248310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.554 [2024-12-13 10:39:49.248320] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.554 [2024-12-13 10:39:49.260778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.554 [2024-12-13 10:39:49.261148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.554 [2024-12-13 10:39:49.261169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.554 [2024-12-13 10:39:49.261180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.554 [2024-12-13 10:39:49.261375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.554 [2024-12-13 10:39:49.261579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.554 [2024-12-13 10:39:49.261595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.554 [2024-12-13 10:39:49.261605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.554 [2024-12-13 10:39:49.261614] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.554 [2024-12-13 10:39:49.274206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.554 [2024-12-13 10:39:49.274582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.554 [2024-12-13 10:39:49.274604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.554 [2024-12-13 10:39:49.274615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.554 [2024-12-13 10:39:49.274812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.554 [2024-12-13 10:39:49.275007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.554 [2024-12-13 10:39:49.275019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.554 [2024-12-13 10:39:49.275028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.554 [2024-12-13 10:39:49.275037] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.554 [2024-12-13 10:39:49.287622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.554 [2024-12-13 10:39:49.288065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.554 [2024-12-13 10:39:49.288086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.554 [2024-12-13 10:39:49.288097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.554 [2024-12-13 10:39:49.288292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.554 [2024-12-13 10:39:49.288496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.554 [2024-12-13 10:39:49.288508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.554 [2024-12-13 10:39:49.288517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.554 [2024-12-13 10:39:49.288526] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.554 [2024-12-13 10:39:49.300944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.554 [2024-12-13 10:39:49.301390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.554 [2024-12-13 10:39:49.301411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.554 [2024-12-13 10:39:49.301423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.554 [2024-12-13 10:39:49.301625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.554 [2024-12-13 10:39:49.301822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.554 [2024-12-13 10:39:49.301833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.554 [2024-12-13 10:39:49.301845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.554 [2024-12-13 10:39:49.301854] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.554 Malloc0 00:37:55.554 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.554 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:55.554 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.554 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:55.554 [2024-12-13 10:39:49.314294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.554 [2024-12-13 10:39:49.314730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.554 [2024-12-13 10:39:49.314752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.554 [2024-12-13 10:39:49.314763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.554 [2024-12-13 10:39:49.314960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.554 [2024-12-13 10:39:49.315155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.554 [2024-12-13 10:39:49.315167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.554 [2024-12-13 10:39:49.315176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.554 [2024-12-13 10:39:49.315185] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:55.554 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.555 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:55.555 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.555 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:55.555 [2024-12-13 10:39:49.327647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.555 [2024-12-13 10:39:49.328116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:55.555 [2024-12-13 10:39:49.328138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:55.555 [2024-12-13 10:39:49.328148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:55.555 [2024-12-13 10:39:49.328343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:55.555 [2024-12-13 10:39:49.328543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:55.555 [2024-12-13 10:39:49.328555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:55.555 [2024-12-13 10:39:49.328565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:55.555 [2024-12-13 10:39:49.328573] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:37:55.555 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.555 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:55.555 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:55.555 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:55.555 [2024-12-13 10:39:49.333461] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:55.555 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.555 10:39:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 4153850 00:37:55.555 [2024-12-13 10:39:49.340992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:55.555 [2024-12-13 10:39:49.371213] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
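Condensed from the host/bdevperf.sh trace above (lines 17-21 of that script), the target-side configuration is five RPCs. Written as direct scripts/rpc.py calls, a sketch that assumes rpc_cmd in the harness forwards to rpc.py against the app's default RPC socket; all command names, flags and values are copied from the trace:

    # TCP transport for the NVMe-oF target (transport options as passed in the trace)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdev with 512-byte blocks to back the namespace
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # subsystem cnode1, any host allowed (-a), fixed serial number (-s)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # attach the bdev as a namespace and listen on 10.0.0.2:4420 over TCP
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is registered ("NVMe/TCP Target Listening on 10.0.0.2 port 4420" above), the bdevperf reconnect loop stops failing, which is why the log switches to "Resetting controller successful" at this point.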
00:37:57.426 3940.43 IOPS, 15.39 MiB/s [2024-12-13T09:39:52.294Z] 4651.50 IOPS, 18.17 MiB/s [2024-12-13T09:39:53.232Z] 5236.89 IOPS, 20.46 MiB/s [2024-12-13T09:39:54.169Z] 5689.20 IOPS, 22.22 MiB/s [2024-12-13T09:39:55.547Z] 6056.18 IOPS, 23.66 MiB/s [2024-12-13T09:39:56.483Z] 6387.33 IOPS, 24.95 MiB/s [2024-12-13T09:39:57.420Z] 6648.46 IOPS, 25.97 MiB/s [2024-12-13T09:39:58.357Z] 6872.29 IOPS, 26.84 MiB/s [2024-12-13T09:39:58.357Z] 7071.27 IOPS, 27.62 MiB/s 00:38:04.466 Latency(us) 00:38:04.466 [2024-12-13T09:39:58.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:04.466 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:04.466 Verification LBA range: start 0x0 length 0x4000 00:38:04.466 Nvme1n1 : 15.04 7051.27 27.54 12109.02 0.00 6641.18 748.98 41443.72 00:38:04.466 [2024-12-13T09:39:58.357Z] =================================================================================================================== 00:38:04.466 [2024-12-13T09:39:58.357Z] Total : 7051.27 27.54 12109.02 0.00 6641.18 748.98 41443.72 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:05.403 rmmod nvme_tcp 00:38:05.403 rmmod nvme_fabrics 00:38:05.403 rmmod nvme_keyring 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 4154753 ']' 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 4154753 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 4154753 ']' 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 4154753 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4154753 
00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4154753' 00:38:05.403 killing process with pid 4154753 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 4154753 00:38:05.403 10:39:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 4154753 00:38:06.781 10:40:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:06.781 10:40:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:06.781 10:40:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:06.781 10:40:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:38:06.781 10:40:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:38:06.781 10:40:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:06.781 10:40:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:38:06.781 10:40:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:06.781 10:40:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:06.781 10:40:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:06.781 10:40:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:06.781 10:40:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:09.316 00:38:09.316 real 0m29.815s 00:38:09.316 user 1m14.435s 00:38:09.316 sys 0m6.681s 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:09.316 ************************************ 00:38:09.316 END TEST nvmf_bdevperf 00:38:09.316 ************************************ 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:09.316 ************************************ 00:38:09.316 START TEST nvmf_target_disconnect 00:38:09.316 ************************************ 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:09.316 * Looking for test storage... 
00:38:09.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:09.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.316 --rc genhtml_branch_coverage=1 00:38:09.316 --rc genhtml_function_coverage=1 00:38:09.316 --rc genhtml_legend=1 00:38:09.316 --rc geninfo_all_blocks=1 00:38:09.316 --rc geninfo_unexecuted_blocks=1 00:38:09.316 00:38:09.316 ' 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:09.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.316 --rc genhtml_branch_coverage=1 00:38:09.316 --rc genhtml_function_coverage=1 00:38:09.316 --rc genhtml_legend=1 00:38:09.316 --rc geninfo_all_blocks=1 00:38:09.316 --rc geninfo_unexecuted_blocks=1 00:38:09.316 00:38:09.316 ' 00:38:09.316 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:09.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.316 --rc genhtml_branch_coverage=1 00:38:09.316 --rc genhtml_function_coverage=1 00:38:09.316 --rc genhtml_legend=1 00:38:09.316 --rc geninfo_all_blocks=1 00:38:09.316 --rc geninfo_unexecuted_blocks=1 00:38:09.317 00:38:09.317 ' 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:09.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:09.317 --rc genhtml_branch_coverage=1 00:38:09.317 --rc genhtml_function_coverage=1 00:38:09.317 --rc genhtml_legend=1 00:38:09.317 --rc geninfo_all_blocks=1 00:38:09.317 --rc geninfo_unexecuted_blocks=1 00:38:09.317 00:38:09.317 ' 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:09.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:38:09.317 10:40:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:14.660 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:14.660 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:14.660 Found net devices under 0000:af:00.0: cvl_0_0 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:14.660 Found net devices under 0000:af:00.1: cvl_0_1 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
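The discovery pass above walks the test's lists of supported Intel/Mellanox PCI device IDs and resolves each matching function to its kernel net device through sysfs (the log finds the two E810 ports 0000:af:00.0/0000:af:00.1, driver ice, mapped to cvl_0_0/cvl_0_1). A minimal sketch of that lookup, using the bus address and sysfs layout shown in the log (not a quote from nvmf/common.sh):

    # Sketch: map one E810 port (0x8086:0x159b at 0000:af:00.0, per the log above)
    # to its net device name the same way the discovery pass does.
    pci=0000:af:00.0
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue
        echo "Found net devices under $pci: ${netdir##*/}"   # e.g. cvl_0_0
    done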
00:38:14.660 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:14.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:14.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:38:14.661 00:38:14.661 --- 10.0.0.2 ping statistics --- 00:38:14.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:14.661 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:14.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:14.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:38:14.661 00:38:14.661 --- 10.0.0.1 ping statistics --- 00:38:14.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:14.661 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:14.661 ************************************ 00:38:14.661 START TEST nvmf_target_disconnect_tc1 00:38:14.661 ************************************ 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:14.661 10:40:08 
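nvmf_tcp_init above splits the two ports into a target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables rule opens TCP port 4420 toward the initiator-facing interface, and a ping in each direction confirms the path before nvme-tcp is loaded. A condensed sketch of that same sequence (commands and names taken from the log; address flushes and the iptables comment are omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns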
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:38:14.661 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:14.661 [2024-12-13 10:40:08.533513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:14.661 [2024-12-13 10:40:08.533686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325800 with addr=10.0.0.2, port=4420 00:38:14.661 [2024-12-13 10:40:08.533907] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:38:14.661 [2024-12-13 10:40:08.533954] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:14.661 [2024-12-13 10:40:08.533987] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:38:14.661 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:38:14.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:38:14.920 Initializing NVMe Controllers 00:38:14.920 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:38:14.920 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:14.920 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:14.920 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:14.920 00:38:14.920 real 0m0.189s 00:38:14.920 user 0m0.076s 00:38:14.921 sys 0m0.112s 00:38:14.921 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:14.921 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:38:14.921 ************************************ 00:38:14.921 END TEST nvmf_target_disconnect_tc1 00:38:14.921 ************************************ 00:38:14.921 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:38:14.921 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:14.921 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
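tc1 above is a negative test: the reconnect example is pointed at 10.0.0.2:4420 before any target is listening, so connect() fails with errno 111 (ECONNREFUSED on Linux), spdk_nvme_probe() cannot create the admin qpair, and the NOT/es=1 bookkeeping turns that non-zero exit into a pass. The same pattern in a standalone form (a sketch, not the test script's wording; flags copied from the logged invocation, run from the spdk repo root):

    # Probe a port nothing is listening on; the test passes only if this fails.
    if ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
        echo "ERROR: probe unexpectedly succeeded"; exit 1
    fi
    echo "probe failed as expected (connection refused)"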
xtrace_disable 00:38:14.921 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:14.921 ************************************ 00:38:14.921 START TEST nvmf_target_disconnect_tc2 00:38:14.921 ************************************ 00:38:14.921 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:38:14.921 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:38:14.921 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:14.921 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:14.921 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:14.921 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:14.921 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=4160045 00:38:14.921 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 4160045 00:38:14.921 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:14.921 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 4160045 ']' 00:38:14.921 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:14.921 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:14.921 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:14.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:14.921 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:14.921 10:40:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:14.921 [2024-12-13 10:40:08.710420] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:38:14.921 [2024-12-13 10:40:08.710508] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:15.179 [2024-12-13 10:40:08.840345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:15.179 [2024-12-13 10:40:08.945679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:15.179 [2024-12-13 10:40:08.945725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:38:15.179 [2024-12-13 10:40:08.945735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:15.179 [2024-12-13 10:40:08.945761] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:15.179 [2024-12-13 10:40:08.945769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:15.179 [2024-12-13 10:40:08.948076] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:38:15.179 [2024-12-13 10:40:08.948217] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:38:15.179 [2024-12-13 10:40:08.948280] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:38:15.179 [2024-12-13 10:40:08.948303] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:38:15.744 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:15.744 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:38:15.744 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:15.744 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:15.744 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:15.744 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:15.744 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:15.744 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.744 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:16.001 Malloc0 00:38:16.001 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.001 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:16.001 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.002 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:16.002 [2024-12-13 10:40:09.646916] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:16.002 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.002 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:16.002 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.002 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:16.002 10:40:09 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.002 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:16.002 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.002 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:16.002 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.002 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:16.002 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.002 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:16.002 [2024-12-13 10:40:09.675193] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:16.002 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.002 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:16.002 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.002 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:16.002 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.002 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=4160286 00:38:16.002 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:38:16.002 10:40:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:17.906 10:40:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 4160045 00:38:17.906 10:40:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:38:17.906 Read completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Read completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Read completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Read completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Read completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Read completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Read completed with error 
(sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Read completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Read completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Read completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Read completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Read completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Read completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Write completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Write completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Read completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Write completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Write completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Write completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Read completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Read completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Write completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Write completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Read completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Write completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.906 Write completed with error (sct=0, sc=8) 00:38:17.906 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 [2024-12-13 10:40:11.715269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write 
completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 [2024-12-13 10:40:11.715652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O 
failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 [2024-12-13 10:40:11.716017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 
00:38:17.907 starting I/O failed 00:38:17.907 Read completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 Write completed with error (sct=0, sc=8) 00:38:17.907 starting I/O failed 00:38:17.907 [2024-12-13 10:40:11.716370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:38:17.907 [2024-12-13 10:40:11.716556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.907 [2024-12-13 10:40:11.716582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.907 qpair failed and we were unable to recover it. 00:38:17.907 [2024-12-13 10:40:11.716765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.907 [2024-12-13 10:40:11.716780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.907 qpair failed and we were unable to recover it. 00:38:17.907 [2024-12-13 10:40:11.717017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.907 [2024-12-13 10:40:11.717032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.907 qpair failed and we were unable to recover it. 00:38:17.907 [2024-12-13 10:40:11.717120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.717134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.717298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.717312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.717426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.717440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.717566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.717581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.717678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.717692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 
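For tc2, the log above brings up a target inside the namespace (nvmf_tgt -m 0xF0), exports a 64 MiB Malloc bdev over NVMe/TCP at 10.0.0.2:4420, starts the reconnect workload, and then kill -9s the target; the failed in-flight completions, the CQ transport errors on qpairs 1-4, and the repeated connect() errno 111 / "qpair failed" retries are the expected fallout of that forced disconnect. A condensed, standalone sketch of the bring-up under those assumptions (the log drives the same RPCs through the harness's rpc_cmd wrapper; scripts/rpc.py is shown here, flags copied from the log):

    # Start the target in the namespace, then configure it over its RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    tgt_pid=$!
    # (the harness waits for the target's RPC socket before issuing these)
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Start I/O, then kill the target to force the disconnect the test is about.
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    sleep 2 && kill -9 "$tgt_pid"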
00:38:17.908 [2024-12-13 10:40:11.717793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.717807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.717970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.717984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.718083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.718097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.718263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.718277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.718505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.718519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.718604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.718617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.718718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.718731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.718841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.718855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.718943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.718956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.719064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.719077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 
00:38:17.908 [2024-12-13 10:40:11.719255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.719269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.719431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.719444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.719633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.719647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.719822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.719835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.719946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.719962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.720137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.720151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.720318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.720332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.720611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.720626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.720722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.720735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.720838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.720853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 
00:38:17.908 [2024-12-13 10:40:11.720963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.720976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.721059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.721072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.721365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.721379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.721626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.721640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.721862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.721876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.721972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.721985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.722146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.722160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.722279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.722304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.722597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.722623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.722758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.722783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 
00:38:17.908 [2024-12-13 10:40:11.723020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.908 [2024-12-13 10:40:11.723035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.908 qpair failed and we were unable to recover it. 00:38:17.908 [2024-12-13 10:40:11.723205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.723219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.723373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.723388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.723488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.723502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.723731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.723745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.723844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.723857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.724114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.724127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.724224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.724238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.724446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.724464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.724630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.724644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 
00:38:17.909 [2024-12-13 10:40:11.724797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.724811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.724917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.724932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.725085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.725099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.725253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.725267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.725435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.725454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.725602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.725616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.725863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.725906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.726184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.726227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.726492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.726537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.726820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.726865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 
00:38:17.909 [2024-12-13 10:40:11.727191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.727234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.727550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.727595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.727855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.727914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.728133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.728177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.728486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.728536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.728703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.728745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.728904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.728947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.729106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.729150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.729314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.729356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.729583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.729613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 
00:38:17.909 [2024-12-13 10:40:11.729817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.729831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.729927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.729941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.730168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.730209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.730442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.730502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.730772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.730820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.730998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.731020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.731298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.731319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.731615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.909 [2024-12-13 10:40:11.731634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.909 qpair failed and we were unable to recover it. 00:38:17.909 [2024-12-13 10:40:11.731943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.731957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.732106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.732121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 
00:38:17.910 [2024-12-13 10:40:11.732325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.732339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.732433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.732446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.732551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.732566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.732754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.732768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.732922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.732936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.733076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.733090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.733253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.733267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.733437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.733455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.733610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.733624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.733850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.733864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 
00:38:17.910 [2024-12-13 10:40:11.733957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.733971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.734143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.734158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.734310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.734324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.734510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.734525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.734690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.734704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.734797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.734810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.734947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.734963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.735111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.735125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.735374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.735388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.735573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.735587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 
00:38:17.910 [2024-12-13 10:40:11.735847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.735890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.736202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.736249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.736510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.736556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.736844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.736888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.737170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.737219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.737482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.737528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.737751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.737795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.738005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.738048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.738307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.738349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.738548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.738563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 
00:38:17.910 [2024-12-13 10:40:11.738715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.738729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.738876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.738889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.738989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.739002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.739145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.739159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.739304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.739318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.739467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.910 [2024-12-13 10:40:11.739482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.910 qpair failed and we were unable to recover it. 00:38:17.910 [2024-12-13 10:40:11.739581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.739594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.739732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.739746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.739977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.739992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.740070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.740082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 
00:38:17.911 [2024-12-13 10:40:11.740157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.740169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.740386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.740430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.740663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.740706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.740982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.741023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.741319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.741361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.741642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.741656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.741768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.741782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.741984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.741997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.742097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.742109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.742332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.742345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 
00:38:17.911 [2024-12-13 10:40:11.742492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.742507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.742648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.742662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.742816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.742830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.742979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.742994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.743148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.743162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.743414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.743429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.743619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.743633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.743785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.743799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.743911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.743925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.744131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.744154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 
00:38:17.911 [2024-12-13 10:40:11.744262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.744276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.744504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.744519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.744689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.744703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.744855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.744893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.745135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.745184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.745376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.745419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.745645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.745659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.745864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.745877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.911 [2024-12-13 10:40:11.746095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.911 [2024-12-13 10:40:11.746109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.911 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.746407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.746420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 
00:38:17.912 [2024-12-13 10:40:11.746666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.746681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.746904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.746919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.747122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.747137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.747317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.747331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.747528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.747544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.747731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.747773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.747930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.747973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.748321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.748363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.748588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.748634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.748866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.748881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 
00:38:17.912 [2024-12-13 10:40:11.749034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.749049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.749295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.749309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.749514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.749549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.749766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.749808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.750029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.750072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.750303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.750344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.750640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.750685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.750850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.750894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.751138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.751153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.751379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.751393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 
00:38:17.912 [2024-12-13 10:40:11.751577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.751592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.751698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.751712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.751814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.751826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.752038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.752052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.752302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.752316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.752534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.752548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.752639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.752651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.752762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.752775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.752888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.752902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.753163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.753176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 
00:38:17.912 [2024-12-13 10:40:11.753424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.753438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.753654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.753668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.753779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.753794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.753877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.753890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.753987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.754002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.754092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.754104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.754337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.754351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.912 [2024-12-13 10:40:11.754497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.912 [2024-12-13 10:40:11.754526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.912 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.754685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.754727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.754884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.754927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 
00:38:17.913 [2024-12-13 10:40:11.755096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.755147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.755364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.755407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.755651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.755665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.755823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.755837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.755983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.755996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.756180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.756194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.756341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.756354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.756627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.756641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.756753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.756769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.756871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.756885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 
00:38:17.913 [2024-12-13 10:40:11.756982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.756994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.757169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.757183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.757337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.757351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.757501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.757543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.757701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.757716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.757824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.757838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.757994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.758009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.758260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.758273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.758446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.758540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.758688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.758730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 
00:38:17.913 [2024-12-13 10:40:11.758900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.758943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.759092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.759106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.759258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.759271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.759418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.759432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.759552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.759567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.759672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.759686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.759931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.759975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.760198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.760239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.760527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.760573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 00:38:17.913 [2024-12-13 10:40:11.760739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:17.913 [2024-12-13 10:40:11.760754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:17.913 qpair failed and we were unable to recover it. 
00:38:17.913 [2024-12-13 10:40:11.760852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:17.913 [2024-12-13 10:40:11.760866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:17.913 qpair failed and we were unable to recover it.
[... the same three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 10:40:11.761080 through 10:40:11.801923 (log prefixes 00:38:17.913 to 00:38:18.198), differing only in timestamps ...]
00:38:18.198 [2024-12-13 10:40:11.802090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.802103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.198 [2024-12-13 10:40:11.802305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.802348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.198 [2024-12-13 10:40:11.802588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.802632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.198 [2024-12-13 10:40:11.802905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.802947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.198 [2024-12-13 10:40:11.803093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.803136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.198 [2024-12-13 10:40:11.803342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.803385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.198 [2024-12-13 10:40:11.803587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.803632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.198 [2024-12-13 10:40:11.803790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.803830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.198 [2024-12-13 10:40:11.804114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.804158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.198 [2024-12-13 10:40:11.804364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.804406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 
00:38:18.198 [2024-12-13 10:40:11.804633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.804679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.198 [2024-12-13 10:40:11.804903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.804917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.198 [2024-12-13 10:40:11.805054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.805068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.198 [2024-12-13 10:40:11.805222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.805235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.198 [2024-12-13 10:40:11.805390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.805403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.198 [2024-12-13 10:40:11.805615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.805629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.198 [2024-12-13 10:40:11.805804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.805818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.198 [2024-12-13 10:40:11.806017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.806059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.198 [2024-12-13 10:40:11.806280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.806323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.198 [2024-12-13 10:40:11.806606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.806642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 
00:38:18.198 [2024-12-13 10:40:11.806713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.806726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.198 [2024-12-13 10:40:11.806817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.806830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.198 [2024-12-13 10:40:11.807042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.807056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.198 [2024-12-13 10:40:11.807344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.807358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.198 [2024-12-13 10:40:11.807549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.198 [2024-12-13 10:40:11.807563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.198 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.807663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.807676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.807900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.807913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.808116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.808129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.808367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.808382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.808606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.808626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 
00:38:18.199 [2024-12-13 10:40:11.808776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.808790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.808884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.808897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.809062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.809076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.809309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.809351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.809545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.809596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.809819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.809862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.810088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.810101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.810344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.810358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.810594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.810608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.810751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.810764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 
00:38:18.199 [2024-12-13 10:40:11.810918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.810932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.811124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.811138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.811348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.811361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.811580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.811593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.811878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.811891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.811974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.811986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.812147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.812161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.812308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.812322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.812517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.812532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.812684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.812698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 
00:38:18.199 [2024-12-13 10:40:11.812787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.812800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.812900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.812914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.813095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.813109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.813275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.813317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.813605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.813650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.813843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.813857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.814037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.814080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.814287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.814330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.814563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.814607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.814907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.814921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 
00:38:18.199 [2024-12-13 10:40:11.815138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.815151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.815316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.815330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.815486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.815499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.199 qpair failed and we were unable to recover it. 00:38:18.199 [2024-12-13 10:40:11.815599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.199 [2024-12-13 10:40:11.815611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.815714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.815728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.815930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.815943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.816052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.816066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.816216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.816231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.816502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.816516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.816712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.816726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 
00:38:18.200 [2024-12-13 10:40:11.816954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.816968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.817080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.817094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.817325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.817339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.817549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.817564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.817707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.817723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.817819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.817831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.817978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.817993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.818164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.818178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.818341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.818355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.818508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.818522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 
00:38:18.200 [2024-12-13 10:40:11.818679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.818692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.818852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.818866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.819022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.819035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.819204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.819217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.819465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.819479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.819661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.819674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.819883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.819924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.820121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.820163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.820442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.820498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.820731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.820773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 
00:38:18.200 [2024-12-13 10:40:11.821015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.821028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.821321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.821334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.821433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.821453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.821597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.821619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.821727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.821740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.821964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.822007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.822233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.822277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.822502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.822546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.822756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.822798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.822943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.822957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 
00:38:18.200 [2024-12-13 10:40:11.823205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.823249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.823521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.823608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.200 [2024-12-13 10:40:11.823988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.200 [2024-12-13 10:40:11.824070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.200 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.824322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.824409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.824618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.824663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.824903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.824916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.825155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.825168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.825398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.825412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.825577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.825591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.825835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.825878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 
00:38:18.201 [2024-12-13 10:40:11.826188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.826231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.826518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.826561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.826784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.826827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.827090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.827103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.827198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.827210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.827419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.827433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.827698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.827712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.827967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.828010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.828214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.828256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.828504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.828547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 
00:38:18.201 [2024-12-13 10:40:11.828769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.828783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.828899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.828913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.829133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.829146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.829396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.829410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.829571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.829584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.829684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.829698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.829914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.829928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.830087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.830101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.830315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.830356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 00:38:18.201 [2024-12-13 10:40:11.830625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.830668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it. 
00:38:18.201 [2024-12-13 10:40:11.830945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.201 [2024-12-13 10:40:11.830986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.201 qpair failed and we were unable to recover it.
00:38:18.201-00:38:18.207 [the same three-message sequence -- posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." -- repeats continuously with successive timestamps from 2024-12-13 10:40:11.830945 through 10:40:11.877554]
00:38:18.207 [2024-12-13 10:40:11.877538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.877554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it.
00:38:18.207 [2024-12-13 10:40:11.877694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.877709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.877911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.877927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.878094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.878108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.878348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.878390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.878678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.878721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.878965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.878978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.879203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.879216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.879376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.879389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.879615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.879630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.879715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.879728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 
00:38:18.207 [2024-12-13 10:40:11.879879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.879892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.880116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.880158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.880364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.880417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.880634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.880720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.880981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.881026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.881390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.881489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.881683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.881729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.881951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.881993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.882237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.882280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.882542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.882585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 
00:38:18.207 [2024-12-13 10:40:11.882815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.882830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.882915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.882929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.883099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.883113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.883258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.883272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.883490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.207 [2024-12-13 10:40:11.883504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.207 qpair failed and we were unable to recover it. 00:38:18.207 [2024-12-13 10:40:11.883726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.883740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.883919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.883932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.884037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.884051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.884256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.884269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.884358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.884370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 
00:38:18.208 [2024-12-13 10:40:11.884560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.884576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.884679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.884693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.884947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.884960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.885142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.885155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.885363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.885377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.885465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.885477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.885610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.885624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.885709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.885723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.885932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.885947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.886221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.886234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 
00:38:18.208 [2024-12-13 10:40:11.886370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.886383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.886555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.886573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.886729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.886742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.886918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.886932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.887086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.887142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.887429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.887504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.887721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.887769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.888001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.888014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.888216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.888230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.888362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.888376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 
00:38:18.208 [2024-12-13 10:40:11.888596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.888610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.888754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.888768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.888991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.889005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.889232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.889275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.889598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.889644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.889914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.889957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.890170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.890214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.890528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.890573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.890804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.890848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.890989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.891030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 
00:38:18.208 [2024-12-13 10:40:11.891270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.891311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.891548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.891591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.891828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.891872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.892179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.208 [2024-12-13 10:40:11.892220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.208 qpair failed and we were unable to recover it. 00:38:18.208 [2024-12-13 10:40:11.892408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.892474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.892692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.892735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.892995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.893009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.893184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.893198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.893386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.893429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.893668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.893712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 
00:38:18.209 [2024-12-13 10:40:11.893919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.893933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.894161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.894174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.894378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.894392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.894623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.894638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.894853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.894866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.894950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.894963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.895040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.895053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.895170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.895183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.895333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.895347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.895436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.895459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 
00:38:18.209 [2024-12-13 10:40:11.895638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.895680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.895825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.895870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.896031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.896073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.896290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.896305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.896512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.896537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.896622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.896634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.896796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.896810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.896964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.896978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.897057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.897070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.897226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.897239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 
00:38:18.209 [2024-12-13 10:40:11.897322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.897334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.897470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.897485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.897586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.897600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.897804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.897818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.897910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.897923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.898126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.898140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.898298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.898311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.898473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.898487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.898570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.898583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.898661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.898673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 
00:38:18.209 [2024-12-13 10:40:11.898770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.898783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.898860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.898873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.898952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.898965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.209 qpair failed and we were unable to recover it. 00:38:18.209 [2024-12-13 10:40:11.899043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.209 [2024-12-13 10:40:11.899055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.899123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.899135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.899216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.899228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.899386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.899399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.899498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.899511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.899606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.899620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.899756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.899769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 
00:38:18.210 [2024-12-13 10:40:11.899858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.899871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.899948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.899959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.900091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.900105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.900182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.900194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.900269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.900282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.900427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.900441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.900615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.900630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.900724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.900737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.900882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.900896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.900976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.900989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 
00:38:18.210 [2024-12-13 10:40:11.901077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.901091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.901167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.901183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.901340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.901355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.901432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.901445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.901509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.901522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.901668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.901681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.901838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.901852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.901932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.901945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.902088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.902102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.902249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.902263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 
00:38:18.210 [2024-12-13 10:40:11.902334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.902347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.902420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.902434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.902514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.902527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.902599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.902613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.902841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.902855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.903007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.903048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.903261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.903303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.903496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.210 [2024-12-13 10:40:11.903540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.210 qpair failed and we were unable to recover it. 00:38:18.210 [2024-12-13 10:40:11.903699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.903744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.903880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.903922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 
00:38:18.211 [2024-12-13 10:40:11.904109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.904123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.904285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.904327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.904554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.904598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.904836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.904879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.905093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.905106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.905304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.905317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.905459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.905473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.905624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.905638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.905921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.906006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.906238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.906287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 
00:38:18.211 [2024-12-13 10:40:11.906495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.906542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.906789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.906834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.907041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.907084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.907367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.907409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.907587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.907633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.907835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.907849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.907960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.907974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.908056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.908071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.908167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.908180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.908343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.908357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 
00:38:18.211 [2024-12-13 10:40:11.908428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.908441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.908614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.908669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.908868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.908909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.909042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.909083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.909250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.909263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.909345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.909358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.909510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.909524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.909608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.909621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.909702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.909716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.909870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.909883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 
00:38:18.211 [2024-12-13 10:40:11.910014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.910027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.910179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.910193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.910339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.910353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.910422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.910434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.910531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.910544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.910637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.910650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.910722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.211 [2024-12-13 10:40:11.910734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.211 qpair failed and we were unable to recover it. 00:38:18.211 [2024-12-13 10:40:11.910830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.910843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.910921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.910934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.911076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.911089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 
00:38:18.212 [2024-12-13 10:40:11.911169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.911182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.911261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.911274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.911367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.911380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.911517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.911530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.911619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.911633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.911717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.911730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.911815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.911828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.912032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.912045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.912124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.912138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.912210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.912223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 
00:38:18.212 [2024-12-13 10:40:11.912392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.912406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.912474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.912487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.912576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.912588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.912671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.912683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.912838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.912851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.912925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.912937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.913122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.913165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.913376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.913416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.913616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.913702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.913859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.913907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 
00:38:18.212 [2024-12-13 10:40:11.914044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.914096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.914249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.914274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.914441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.914469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.914552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.914572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.914721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.914742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.914824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.914836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.914980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.914994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.915067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.915080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.915151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.915163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.915307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.915321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 
00:38:18.212 [2024-12-13 10:40:11.915488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.915503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.915640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.915653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.915785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.915799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.915891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.915904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.915976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.915989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.916081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.212 [2024-12-13 10:40:11.916093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.212 qpair failed and we were unable to recover it. 00:38:18.212 [2024-12-13 10:40:11.916173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.916186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.916324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.916338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.916501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.916517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.916653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.916673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 
00:38:18.213 [2024-12-13 10:40:11.916737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.916749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.916814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.916826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.916900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.916912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.917012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.917025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.917111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.917124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.917212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.917225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.917310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.917323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.917414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.917426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.917512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.917525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.917675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.917687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 
00:38:18.213 [2024-12-13 10:40:11.917763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.917776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.917867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.917880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.917954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.917967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.918128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.918141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.918214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.918226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.918458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.918472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.918626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.918640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.918705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.918717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.918868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.918881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.918973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.918987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 
00:38:18.213 [2024-12-13 10:40:11.919120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.919134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.919224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.919238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.919319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.919332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.919404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.919417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.919516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.919529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.919677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.919691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.919768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.919781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.919922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.919936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.920008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.920021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.920096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.920108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 
00:38:18.213 [2024-12-13 10:40:11.920316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.920331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.920510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.920524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.920592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.920605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.920698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.920711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.920794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.920806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.213 qpair failed and we were unable to recover it. 00:38:18.213 [2024-12-13 10:40:11.920884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.213 [2024-12-13 10:40:11.920896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.921026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.921040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.921220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.921234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.921314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.921326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.921407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.921419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 
00:38:18.214 [2024-12-13 10:40:11.921519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.921533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.921599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.921612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.921753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.921767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.921841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.921854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.921944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.921957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.922043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.922055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.922132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.922145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.922225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.922238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.922412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.922439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.922582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.922628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 
00:38:18.214 [2024-12-13 10:40:11.922833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.922881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.923023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.923068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.923206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.923247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.923515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.923562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.923771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.923814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.924042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.924086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.924222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.924235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.924386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.924400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.924491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.924504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.924642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.924656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 
00:38:18.214 [2024-12-13 10:40:11.924796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.924809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.924900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.924914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.925068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.925081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.925170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.925183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.925270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.925283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.925402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.925416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.925553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.925571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.925647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.925661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.925808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.925822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 00:38:18.214 [2024-12-13 10:40:11.925985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.214 [2024-12-13 10:40:11.925999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.214 qpair failed and we were unable to recover it. 
00:38:18.214 [2024-12-13 10:40:11.926220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.215 [2024-12-13 10:40:11.926269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.215 qpair failed and we were unable to recover it. 00:38:18.215 [2024-12-13 10:40:11.926409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.215 [2024-12-13 10:40:11.926461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.215 qpair failed and we were unable to recover it. 00:38:18.215 [2024-12-13 10:40:11.926613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.215 [2024-12-13 10:40:11.926656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.215 qpair failed and we were unable to recover it. 00:38:18.215 [2024-12-13 10:40:11.926862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.215 [2024-12-13 10:40:11.926876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.215 qpair failed and we were unable to recover it. 00:38:18.215 [2024-12-13 10:40:11.927044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.215 [2024-12-13 10:40:11.927057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.215 qpair failed and we were unable to recover it. 00:38:18.215 [2024-12-13 10:40:11.927130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.215 [2024-12-13 10:40:11.927143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.215 qpair failed and we were unable to recover it. 00:38:18.215 [2024-12-13 10:40:11.927213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.215 [2024-12-13 10:40:11.927225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.215 qpair failed and we were unable to recover it. 00:38:18.215 [2024-12-13 10:40:11.927359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.215 [2024-12-13 10:40:11.927371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.215 qpair failed and we were unable to recover it. 00:38:18.215 [2024-12-13 10:40:11.927465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.215 [2024-12-13 10:40:11.927478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.215 qpair failed and we were unable to recover it. 00:38:18.215 [2024-12-13 10:40:11.927627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.215 [2024-12-13 10:40:11.927640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.215 qpair failed and we were unable to recover it. 
00:38:18.215 [2024-12-13 10:40:11.927780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:18.215 [2024-12-13 10:40:11.927792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:18.215 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=... with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 10:40:11.927937 through 10:40:11.969790, almost always for tqpair=0x61500033fe80, with occasional attempts on tqpair=0x615000326480, 0x615000350000 and 0x61500032ff80 ...]
00:38:18.220 [2024-12-13 10:40:11.969894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:18.220 [2024-12-13 10:40:11.969908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:18.220 qpair failed and we were unable to recover it.
00:38:18.220 [2024-12-13 10:40:11.970017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.220 [2024-12-13 10:40:11.970030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.220 qpair failed and we were unable to recover it. 00:38:18.220 [2024-12-13 10:40:11.970226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.220 [2024-12-13 10:40:11.970240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.220 qpair failed and we were unable to recover it. 00:38:18.220 [2024-12-13 10:40:11.970508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.220 [2024-12-13 10:40:11.970522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.220 qpair failed and we were unable to recover it. 00:38:18.220 [2024-12-13 10:40:11.970689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.220 [2024-12-13 10:40:11.970703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.220 qpair failed and we were unable to recover it. 00:38:18.220 [2024-12-13 10:40:11.970838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.220 [2024-12-13 10:40:11.970851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.220 qpair failed and we were unable to recover it. 00:38:18.220 [2024-12-13 10:40:11.971029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.220 [2024-12-13 10:40:11.971043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.220 qpair failed and we were unable to recover it. 00:38:18.220 [2024-12-13 10:40:11.971214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.220 [2024-12-13 10:40:11.971229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.220 qpair failed and we were unable to recover it. 00:38:18.220 [2024-12-13 10:40:11.971381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.220 [2024-12-13 10:40:11.971394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.220 qpair failed and we were unable to recover it. 00:38:18.220 [2024-12-13 10:40:11.971573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.220 [2024-12-13 10:40:11.971616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.220 qpair failed and we were unable to recover it. 00:38:18.220 [2024-12-13 10:40:11.971819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.220 [2024-12-13 10:40:11.971862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.220 qpair failed and we were unable to recover it. 
00:38:18.220 [2024-12-13 10:40:11.972072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.972116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.972300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.972314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.972490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.972505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.972663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.972677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.972782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.972795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.972903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.972917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.973072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.973086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.973237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.973251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.973405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.973419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.973522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.973534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 
00:38:18.221 [2024-12-13 10:40:11.973640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.973653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.973745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.973758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.973875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.973891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.974032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.974046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.974214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.974227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.974381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.974396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.974580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.974594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.974742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.974756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.974841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.974854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.975036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.975050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 
00:38:18.221 [2024-12-13 10:40:11.975201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.975215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.975362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.975376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.975467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.975481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.975649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.975663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.975766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.975779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.975870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.975884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.975988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.976001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.976193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.976207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.976441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.976503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.976663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.976705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 
00:38:18.221 [2024-12-13 10:40:11.976920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.976964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.977176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.977190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.977333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.977346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.977531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.977577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.977737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.977791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.978007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.978052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.978268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.978282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.978376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.978390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.978595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.221 [2024-12-13 10:40:11.978608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.221 qpair failed and we were unable to recover it. 00:38:18.221 [2024-12-13 10:40:11.978785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.978799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 
00:38:18.222 [2024-12-13 10:40:11.978907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.978920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.979071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.979085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.979335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.979378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.979627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.979672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.979934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.979976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.980296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.980342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.980614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.980670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.980829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.980871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.981138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.981153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.981251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.981264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 
00:38:18.222 [2024-12-13 10:40:11.981417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.981432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.981545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.981560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.981654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.981670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.981771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.981785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.981996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.982011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.982192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.982206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.982361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.982375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.982470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.982483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.982660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.982674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.982776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.982791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 
00:38:18.222 [2024-12-13 10:40:11.982882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.982896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.983130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.983174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.983472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.983518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.983681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.983724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.983881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.983924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.984213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.984258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.984403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.984417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.984594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.984608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.984717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.984733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.984826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.984839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 
00:38:18.222 [2024-12-13 10:40:11.984923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.984936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.985166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.985179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.985322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.985335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.985477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.985491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.985718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.985731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.985838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.985851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.985994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.986008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.986110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.222 [2024-12-13 10:40:11.986123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.222 qpair failed and we were unable to recover it. 00:38:18.222 [2024-12-13 10:40:11.986278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.986292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.986495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.986510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 
00:38:18.223 [2024-12-13 10:40:11.986674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.986689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.986857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.986871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.986985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.987002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.987166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.987181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.987372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.987416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.987659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.987703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.987901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.987945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.988226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.988240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.988395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.988408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.988539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.988554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 
00:38:18.223 [2024-12-13 10:40:11.988644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.988656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.988808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.988821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.988962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.988976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.989157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.989171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.989314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.989327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.989484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.989499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.989674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.989688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.989847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.989862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.989957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.989976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.990051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.990064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 
00:38:18.223 [2024-12-13 10:40:11.990241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.990254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.990427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.990441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.990526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.990539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.990686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.990699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.990792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.990806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.991006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.991019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.991238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.991252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.991473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.991487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.991638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.991655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.991735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.991749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 
00:38:18.223 [2024-12-13 10:40:11.991944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.991957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.992074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.992088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.992317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.223 [2024-12-13 10:40:11.992330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.223 qpair failed and we were unable to recover it. 00:38:18.223 [2024-12-13 10:40:11.992523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.992537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.992697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.992711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.992884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.992898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.993101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.993115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.993281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.993323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.993539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.993585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.993867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.993910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 
00:38:18.224 [2024-12-13 10:40:11.994168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.994209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.994417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.994476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.994682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.994696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.994859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.994873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.995016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.995030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.995179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.995192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.995419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.995432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.995674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.995688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.995786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.995799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.995996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.996010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 
00:38:18.224 [2024-12-13 10:40:11.996103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.996116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.996295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.996336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.996650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.996696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.996939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.996981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.997289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.997303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.997477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.997492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.997647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.997702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.997984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.998027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.998268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.998311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.998533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.998580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 
00:38:18.224 [2024-12-13 10:40:11.998793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.998837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.999101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.999144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.999359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.999373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.999601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.999616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.999713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.999728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:11.999882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:11.999896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:12.000003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:12.000018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:12.000106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:12.000119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:12.000280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:12.000297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:12.000380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:12.000393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 
00:38:18.224 [2024-12-13 10:40:12.000600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:12.000615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:12.000769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.224 [2024-12-13 10:40:12.000782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.224 qpair failed and we were unable to recover it. 00:38:18.224 [2024-12-13 10:40:12.000870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.000883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.001040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.001055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.001281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.001295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.001455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.001469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.001606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.001619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.001687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.001700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.001777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.001789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.001931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.001944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 
00:38:18.225 [2024-12-13 10:40:12.002155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.002169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.002379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.002392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.002489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.002508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.002610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.002623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.002779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.002792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.003026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.003039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.003244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.003257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.003467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.003481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.003708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.003722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.003973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.003987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 
00:38:18.225 [2024-12-13 10:40:12.004077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.004089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.004355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.004369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.004563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.004577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.004789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.004803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.004908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.004921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.005202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.005216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.005360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.005373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.005564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.005579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.005822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.005836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.005996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.006009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 
00:38:18.225 [2024-12-13 10:40:12.006188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.006202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.006403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.006416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.006652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.006667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.006893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.006907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.007110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.007124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.007391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.007432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.007706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.007750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.007957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.007998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.008290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.008341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.008566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.008582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 
00:38:18.225 [2024-12-13 10:40:12.008732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.008746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.225 qpair failed and we were unable to recover it. 00:38:18.225 [2024-12-13 10:40:12.008847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.225 [2024-12-13 10:40:12.008861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.009038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.009051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.009279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.009293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.009490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.009504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.009616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.009629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.009775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.009789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.009927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.009940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.010042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.010056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.010202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.010215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 
00:38:18.226 [2024-12-13 10:40:12.010384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.010399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.010545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.010560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.010710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.010724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.010829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.010843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.010941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.010954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.011061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.011075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.011324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.011338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.011510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.011525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.011637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.011652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.011794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.011809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 
00:38:18.226 [2024-12-13 10:40:12.011917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.011931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.012136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.012151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.012338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.012352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.012427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.012440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.012619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.012633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.012744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.012757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.012903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.012918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.013071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.013084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.013225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.013239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.013374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.013388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 
00:38:18.226 [2024-12-13 10:40:12.013551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.013566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.013723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.013738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.013844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.013857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.014004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.014018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.014280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.014294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.014517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.014532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.014751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.014769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.014931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.014945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.015140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.015156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.226 [2024-12-13 10:40:12.015257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.015272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 
00:38:18.226 [2024-12-13 10:40:12.015473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.226 [2024-12-13 10:40:12.015487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.226 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.015569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.015583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.015759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.015774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.015912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.015927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.016019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.016031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.016169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.016183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.016386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.016401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.016584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.016598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.016750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.016764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.016940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.016953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 
00:38:18.227 [2024-12-13 10:40:12.017205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.017218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.017322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.017336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.017536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.017551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.017653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.017666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.017780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.017793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.017888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.017902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.018001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.018014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.018236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.018251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.018485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.018500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.018673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.018688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 
00:38:18.227 [2024-12-13 10:40:12.018837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.018880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.019177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.019221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.019498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.019513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.019607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.019621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.019794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.019808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.019970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.019984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.020246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.020290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.020568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.020612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.020823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.020866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.021120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.021162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 
00:38:18.227 [2024-12-13 10:40:12.021396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.021409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.021547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.021562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.021648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.021661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.021744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.021761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.021862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.021876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.022011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.022025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.022183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.022197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.227 [2024-12-13 10:40:12.022375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.227 [2024-12-13 10:40:12.022389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.227 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.022628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.022642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.022718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.022732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 
00:38:18.228 [2024-12-13 10:40:12.022840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.022854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.023002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.023016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.023242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.023255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.023421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.023435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.023604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.023618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.023707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.023720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.023899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.023912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.024107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.024121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.024366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.024380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.024610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.024625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 
00:38:18.228 [2024-12-13 10:40:12.024783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.024797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.024944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.024957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.025055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.025068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.025167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.025181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.025344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.025358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.025576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.025590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.025810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.025825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.026129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.026172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.026400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.026441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.026683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.026727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 
00:38:18.228 [2024-12-13 10:40:12.026930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.026984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.027268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.027320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.027574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.027588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.027688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.027704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.027808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.027823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.027977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.027994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.028169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.028183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.028288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.028302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.028483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.028497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.028670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.028684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 
00:38:18.228 [2024-12-13 10:40:12.028869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.028882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.029055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.029069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.029295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.029309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.029530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.029544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.228 [2024-12-13 10:40:12.029721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.228 [2024-12-13 10:40:12.029736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.228 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.029835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.029849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.029998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.030011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.030205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.030220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.030459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.030473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.030648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.030662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 
00:38:18.229 [2024-12-13 10:40:12.030769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.030783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.030996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.031010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.031179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.031192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.031416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.031429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.031667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.031681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.031847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.031860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.032012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.032025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.032156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.032170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.032359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.032373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.032517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.032531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 
00:38:18.229 [2024-12-13 10:40:12.032635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.032649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.032901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.032943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.033167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.033211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.033521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.033566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.033833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.033876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.034168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.034215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.034371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.034386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.034539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.034553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.034672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.034686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.034840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.034853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 
00:38:18.229 [2024-12-13 10:40:12.035020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.035033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.035189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.035203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.035456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.035470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.035604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.035618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.035749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.035763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.035915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.035931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.036161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.036175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.036325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.036339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.036577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.036592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.036694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.036708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 
00:38:18.229 [2024-12-13 10:40:12.036864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.036906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.037127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.037171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.037386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.037428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.229 [2024-12-13 10:40:12.037625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.229 [2024-12-13 10:40:12.037639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.229 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.037805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.037820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.037917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.037929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.038075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.038089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.038284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.038298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.038435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.038454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.038608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.038623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 
00:38:18.230 [2024-12-13 10:40:12.038762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.038776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.038873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.038886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.038971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.038985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.039140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.039155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.039356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.039375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.039526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.039540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.039701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.039715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.039878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.039893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.040040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.040053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.040268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.040281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 
00:38:18.230 [2024-12-13 10:40:12.040421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.040434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.040559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.040573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.040745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.040758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.040918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.040931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.041154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.041168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.041300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.041313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.041538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.041553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.041788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.041803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.041949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.041963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.042209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.042252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 
00:38:18.230 [2024-12-13 10:40:12.042513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.042557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.042835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.042877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.043023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.043065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.043348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.043362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.043666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.043681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.043772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.043786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.043953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.043967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.044155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.044169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.044438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.044456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.044666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.044680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 
00:38:18.230 [2024-12-13 10:40:12.044883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.044897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.045031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.045044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.045152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.230 [2024-12-13 10:40:12.045166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.230 qpair failed and we were unable to recover it. 00:38:18.230 [2024-12-13 10:40:12.045317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.045331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.045545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.045559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.045660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.045676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.045824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.045854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.046000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.046042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.046255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.046296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.046558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.046572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 
00:38:18.231 [2024-12-13 10:40:12.046739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.046753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.046899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.046913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.047082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.047096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.047308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.047322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.047420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.047434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.047649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.047663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.047818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.047832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.047981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.047995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.048167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.048180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.048430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.048444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 
00:38:18.231 [2024-12-13 10:40:12.048575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.048589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.048685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.048698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.048902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.048916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.049107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.049120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.049276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.049290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.049550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.049566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.049667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.049680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.049779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.049792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.049891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.049904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.050197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.050242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 
00:38:18.231 [2024-12-13 10:40:12.050485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.050538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.050700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.050744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.050958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.051003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.051207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.051250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.051473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.051529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.051674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.051692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.051908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.051922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.052111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.052125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.052296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.052310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.052488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.052502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 
00:38:18.231 [2024-12-13 10:40:12.052612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.052626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.231 [2024-12-13 10:40:12.052714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.231 [2024-12-13 10:40:12.052728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.231 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.052881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.052894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.053010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.053026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.053290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.053345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.053601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.053645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.053841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.053883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.054282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.054324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.054527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.054541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.054649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.054663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 
00:38:18.232 [2024-12-13 10:40:12.054817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.054830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.054913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.054926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.055076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.055091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.055188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.055202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.055360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.055374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.055531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.055546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.055622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.055635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.055791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.055804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.055977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.055990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.056162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.056176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 
00:38:18.232 [2024-12-13 10:40:12.056320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.056334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.056576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.056618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.056796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.056839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.057125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.057167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.057358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.057401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.057619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.057633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.057786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.057800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.057982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.057996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.058154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.058168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.058419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.058433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 
00:38:18.232 [2024-12-13 10:40:12.058578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.058593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.058734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.058748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.058947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.058961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.059215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.059229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.059445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.059465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.059610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.059626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.232 qpair failed and we were unable to recover it. 00:38:18.232 [2024-12-13 10:40:12.059840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.232 [2024-12-13 10:40:12.059853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.060010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.060023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.060216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.060229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.060371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.060385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 
00:38:18.233 [2024-12-13 10:40:12.060549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.060563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.060717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.060731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.060846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.060860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.060991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.061004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.061306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.061320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.061517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.061531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.061683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.061697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.061786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.061799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.062000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.062014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.062276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.062290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 
00:38:18.233 [2024-12-13 10:40:12.062536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.062549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.062714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.062728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.062820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.062834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.062985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.062999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.063277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.063319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.063601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.063645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.063876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.063918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.064253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.064295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.064581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.064596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.064768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.064781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 
00:38:18.233 [2024-12-13 10:40:12.064961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.064975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.065233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.065246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.065402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.065416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.065599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.065614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.065834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.065848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.066002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.066016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.066243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.066256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.066416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.066430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.066599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.066618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.066787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.066801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 
00:38:18.233 [2024-12-13 10:40:12.066971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.066984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.067227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.067242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.067537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.067580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.067836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.067881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.233 [2024-12-13 10:40:12.068162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.233 [2024-12-13 10:40:12.068205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.233 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.068487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.068543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.068748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.068762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.068966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.068980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.069162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.069176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.069342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.069357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 
00:38:18.518 [2024-12-13 10:40:12.069556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.069571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.069721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.069735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.069955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.069969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.070254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.070267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.070502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.070515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.070661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.070675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.070833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.070846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.070937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.070949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.071115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.071128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.071365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.071378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 
00:38:18.518 [2024-12-13 10:40:12.071541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.071555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.071648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.071661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.071832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.071845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.071929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.071941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.072191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.072205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.072462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.072477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.072723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.072737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.072838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.072852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.073008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.073022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.073249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.073263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 
00:38:18.518 [2024-12-13 10:40:12.073514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.518 [2024-12-13 10:40:12.073529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.518 qpair failed and we were unable to recover it. 00:38:18.518 [2024-12-13 10:40:12.073629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.073654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.073880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.073895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.074003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.074019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.074150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.074163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.074313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.074327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.074481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.074495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.074588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.074601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.074769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.074783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.074948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.074962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 
00:38:18.519 [2024-12-13 10:40:12.075062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.075076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.075209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.075223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.075459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.075472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.075685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.075699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.075880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.075893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.075989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.076004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.076077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.076089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.076313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.076326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.076526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.076541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.076630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.076643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 
00:38:18.519 [2024-12-13 10:40:12.076846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.076859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.077033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.077048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.077281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.077323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.077478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.077522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.077711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.077754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.077957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.077999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.078276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.078318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.078488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.078532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.078788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.078802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.079033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.079047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 
00:38:18.519 [2024-12-13 10:40:12.079249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.079263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.079478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.079492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.079595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.079613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.079820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.079834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.080069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.080084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.080313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.080327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.080495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.080509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.080664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.080679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.080766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.080779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 00:38:18.519 [2024-12-13 10:40:12.080881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.519 [2024-12-13 10:40:12.080893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.519 qpair failed and we were unable to recover it. 
00:38:18.519 [2024-12-13 10:40:12.080989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.081001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.081177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.081190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.081375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.081389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.081608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.081622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.081729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.081772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.081919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.081962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.082185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.082229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.082531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.082546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.082650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.082665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.082911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.082924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 
00:38:18.520 [2024-12-13 10:40:12.083101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.083115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.083247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.083261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.083497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.083510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.083605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.083617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.083720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.083733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.083878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.083894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.084040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.084054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.084209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.084222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.084363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.084377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.084471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.084484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 
00:38:18.520 [2024-12-13 10:40:12.084562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.084575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.084727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.084740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.084896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.084910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.085057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.085071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.085233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.085246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.085505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.085520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.085672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.085686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.085772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.085785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.085884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.085898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.086104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.086117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 
00:38:18.520 [2024-12-13 10:40:12.086349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.086363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.086572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.086585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.086729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.086742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.086896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.086911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.086992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.087005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.087249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.087263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.087467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.087482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.087666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.087680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.520 [2024-12-13 10:40:12.087811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.520 [2024-12-13 10:40:12.087825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.520 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.088030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.088045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 
00:38:18.521 [2024-12-13 10:40:12.088290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.088304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.088476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.088491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.088608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325d00 is same with the state(6) to be set 00:38:18.521 [2024-12-13 10:40:12.088886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.088919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.089101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.089127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.089364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.089386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.089563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.089586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.089776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.089798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.089958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.089979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.090203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.090225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 
00:38:18.521 [2024-12-13 10:40:12.090485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.090507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.090618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.090639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.090744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.090760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.090851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.090864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.091003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.091016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.091169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.091183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.091320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.091334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.091505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.091519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.091670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.091684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.091856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.091869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 
00:38:18.521 [2024-12-13 10:40:12.092119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.092131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.092376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.092389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.092474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.092487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.092628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.092641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.092726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.092738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.092876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.092890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.092987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.093005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.093156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.093170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.093371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.093384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.093630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.093646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 
00:38:18.521 [2024-12-13 10:40:12.093814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.093828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.093995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.094009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.094164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.094178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.094318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.094331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.094428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.094441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.094692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.094707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.094871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.094885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.095101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.521 [2024-12-13 10:40:12.095116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.521 qpair failed and we were unable to recover it. 00:38:18.521 [2024-12-13 10:40:12.095268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.095282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.095433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.095453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 
00:38:18.522 [2024-12-13 10:40:12.095593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.095606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.095808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.095823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.095897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.095911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.095994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.096007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.096095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.096108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.096342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.096356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.096582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.096598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.096871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.096885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.097049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.097062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.097289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.097304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 
00:38:18.522 [2024-12-13 10:40:12.097428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.097441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.097649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.097663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.097804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.097817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.097966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.097980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.098075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.098087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.098186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.098201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.098338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.098351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.098523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.098538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.098740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.098782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.099013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.099054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 
00:38:18.522 [2024-12-13 10:40:12.099342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.099356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.099445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.099462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.099605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.099618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.099757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.099771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.099960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.099973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.100050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.100062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.100208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.100221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.100306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.100320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.100407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.100419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.100704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.100759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 
00:38:18.522 [2024-12-13 10:40:12.100972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.101017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.101323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.522 [2024-12-13 10:40:12.101367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.522 qpair failed and we were unable to recover it. 00:38:18.522 [2024-12-13 10:40:12.101598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.101641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.101936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.101951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.102107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.102121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.102246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.102260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.102423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.102437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.102669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.102684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.102791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.102805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.103031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.103044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 
00:38:18.523 [2024-12-13 10:40:12.103198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.103211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.103364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.103378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.103459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.103473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.103685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.103700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.103906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.103920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.104063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.104076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.104243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.104257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.104483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.104499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.104704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.104717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.104806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.104821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 
00:38:18.523 [2024-12-13 10:40:12.105072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.105091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.105317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.105331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.105417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.105430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.105640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.105654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.105859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.105872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.106038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.106052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.106265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.106279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.106436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.106454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.106560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.106574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.106803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.106817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 
00:38:18.523 [2024-12-13 10:40:12.106962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.106975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.107124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.107137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.107285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.107298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.107518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.107532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.107681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.107696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.107852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.107865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.108010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.108024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.108184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.108198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.108400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.108413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.108549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.108566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 
00:38:18.523 [2024-12-13 10:40:12.108808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.523 [2024-12-13 10:40:12.108824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.523 qpair failed and we were unable to recover it. 00:38:18.523 [2024-12-13 10:40:12.108911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.108924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.109173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.109186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.109408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.109423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.109690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.109704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.109855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.109869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.110023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.110036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.110259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.110272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.110422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.110435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.110518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.110532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 
00:38:18.524 [2024-12-13 10:40:12.110675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.110688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.110793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.110806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.110981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.110995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.111211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.111224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.111390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.111403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.111587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.111602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.111770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.111783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.111938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.111950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.112060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.112074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.112326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.112339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 
00:38:18.524 [2024-12-13 10:40:12.112405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.112417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.112590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.112604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.112695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.112708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.112826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.112840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.113031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.113044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.113247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.113260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.113493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.113510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.113654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.113667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.113761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.113773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.113975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.113990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 
00:38:18.524 [2024-12-13 10:40:12.114203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.114218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.114369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.114382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.114470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.114484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.114635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.114649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.114876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.114889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.115028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.115042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.115269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.115283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.115503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.115517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.115786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.115799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 00:38:18.524 [2024-12-13 10:40:12.115955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.524 [2024-12-13 10:40:12.115971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.524 qpair failed and we were unable to recover it. 
00:38:18.524 [2024-12-13 10:40:12.116173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.116186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.116379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.116392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.116579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.116593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.116843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.116857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.117101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.117120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.117293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.117307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.117505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.117547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.117775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.117817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.117961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.118004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.118213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.118257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 
00:38:18.525 [2024-12-13 10:40:12.118526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.118571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.118752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.118765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.118870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.118884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.119030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.119045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.119205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.119248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.119526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.119570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.119775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.119788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.120016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.120029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.120267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.120281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.120427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.120441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 
00:38:18.525 [2024-12-13 10:40:12.120585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.120600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.120802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.120816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.120979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.121021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.121351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.121395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.121664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.121679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.121836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.121849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.122015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.122058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.122372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.122415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.122762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.122827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.123065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.123114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 
00:38:18.525 [2024-12-13 10:40:12.123284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.123328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.123571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.123597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.123884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.123933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.124168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.124211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.124514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.124573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.124849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.124870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.125125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.125147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.125315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.125335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.125536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.125559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.525 qpair failed and we were unable to recover it. 00:38:18.525 [2024-12-13 10:40:12.125758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.525 [2024-12-13 10:40:12.125809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.526 qpair failed and we were unable to recover it. 
00:38:18.526 [2024-12-13 10:40:12.126042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.526 [2024-12-13 10:40:12.126086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.526 qpair failed and we were unable to recover it. 00:38:18.526 [2024-12-13 10:40:12.126384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.526 [2024-12-13 10:40:12.126427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.526 qpair failed and we were unable to recover it. 00:38:18.526 [2024-12-13 10:40:12.126750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.526 [2024-12-13 10:40:12.126773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.526 qpair failed and we were unable to recover it. 00:38:18.526 [2024-12-13 10:40:12.126991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.526 [2024-12-13 10:40:12.127013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.526 qpair failed and we were unable to recover it. 00:38:18.526 [2024-12-13 10:40:12.127185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.526 [2024-12-13 10:40:12.127208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.526 qpair failed and we were unable to recover it. 00:38:18.526 [2024-12-13 10:40:12.127380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.526 [2024-12-13 10:40:12.127401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.526 qpair failed and we were unable to recover it. 00:38:18.526 [2024-12-13 10:40:12.127622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.526 [2024-12-13 10:40:12.127639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.526 qpair failed and we were unable to recover it. 00:38:18.526 [2024-12-13 10:40:12.127867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.526 [2024-12-13 10:40:12.127881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.526 qpair failed and we were unable to recover it. 00:38:18.526 [2024-12-13 10:40:12.128015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.526 [2024-12-13 10:40:12.128029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.526 qpair failed and we were unable to recover it. 00:38:18.526 [2024-12-13 10:40:12.128283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.526 [2024-12-13 10:40:12.128298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.526 qpair failed and we were unable to recover it. 
00:38:18.526 [2024-12-13 10:40:12.128458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:18.526 [2024-12-13 10:40:12.128472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:18.526 qpair failed and we were unable to recover it.
00:38:18.526 [... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x61500033fe80, 0x615000326480, or 0x61500032ff80 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 10:40:12.128 through 10:40:12.169 ...]
00:38:18.527 [2024-12-13 10:40:12.168991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:18.527 [2024-12-13 10:40:12.169004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:18.527 qpair failed and we were unable to recover it.
00:38:18.531 [2024-12-13 10:40:12.169170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.531 [2024-12-13 10:40:12.169184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.531 qpair failed and we were unable to recover it. 00:38:18.531 [2024-12-13 10:40:12.169277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.531 [2024-12-13 10:40:12.169290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.531 qpair failed and we were unable to recover it. 00:38:18.531 [2024-12-13 10:40:12.169433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.531 [2024-12-13 10:40:12.169446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.531 qpair failed and we were unable to recover it. 00:38:18.531 [2024-12-13 10:40:12.169525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.531 [2024-12-13 10:40:12.169539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.531 qpair failed and we were unable to recover it. 00:38:18.531 [2024-12-13 10:40:12.169711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.531 [2024-12-13 10:40:12.169724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.531 qpair failed and we were unable to recover it. 00:38:18.531 [2024-12-13 10:40:12.169798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.531 [2024-12-13 10:40:12.169811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.169960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.169975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.170063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.170078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.170225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.170239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.170393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.170407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 
00:38:18.532 [2024-12-13 10:40:12.170545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.170559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.170651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.170666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.170747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.170761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.170834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.170848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.170926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.170941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.171017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.171031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.171168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.171183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.171296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.171313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.171469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.171485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.171572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.171589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 
00:38:18.532 [2024-12-13 10:40:12.171675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.171692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.171794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.171808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.171946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.171960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.172227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.172269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.172489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.172535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.172689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.172729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.172796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.172810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.172960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.172974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.173076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.173089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.173177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.173190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 
00:38:18.532 [2024-12-13 10:40:12.173347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.173361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.173460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.173474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.173540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.173554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.173665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.173678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.173833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.173847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.173943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.173956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.174037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.174051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.174200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.174214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.174371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.174414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.174696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.174753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 
00:38:18.532 [2024-12-13 10:40:12.174911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.174925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.175072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.175086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.175268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.175282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.175427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.175440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.532 qpair failed and we were unable to recover it. 00:38:18.532 [2024-12-13 10:40:12.175594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.532 [2024-12-13 10:40:12.175608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.175704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.175717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.175868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.175882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.176029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.176042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.176205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.176219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.176305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.176318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 
00:38:18.533 [2024-12-13 10:40:12.176410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.176423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.176567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.176581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.176719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.176732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.176807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.176821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.176965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.176979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.177128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.177141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.177229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.177242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.177315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.177328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.177408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.177423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.177605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.177620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 
00:38:18.533 [2024-12-13 10:40:12.177794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.177809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.177967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.177981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.178132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.178176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.178399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.178443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.178592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.178646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.178854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.178868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.178942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.178956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.179033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.179047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.179137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.179151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.179291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.179305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 
00:38:18.533 [2024-12-13 10:40:12.179386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.179399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.179604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.179619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.179766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.179810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.180010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.180053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.180216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.180260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.180403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.180446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.180606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.180650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.180811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.180854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.181058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.181071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.181210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.181224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 
00:38:18.533 [2024-12-13 10:40:12.181377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.181390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.181487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.181503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.181592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.181607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.533 [2024-12-13 10:40:12.181777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.533 [2024-12-13 10:40:12.181820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.533 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.181973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.182016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.182231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.182273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.182492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.182543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.182687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.182731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.182985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.183028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.183288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.183331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 
00:38:18.534 [2024-12-13 10:40:12.183592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.183637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.183841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.183884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.183991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.184004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.184155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.184169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.184370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.184384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.184546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.184559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.184772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.184786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.184877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.184890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.185045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.185061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.185164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.185201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 
00:38:18.534 [2024-12-13 10:40:12.185445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.185501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.185655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.185697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.185917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.185931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.186084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.186139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.186372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.186414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.186700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.186714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.186862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.186876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.187009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.187023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.187241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.187285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.187500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.187544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 
00:38:18.534 [2024-12-13 10:40:12.187679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.187722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.187937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.187951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.188086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.188099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.188183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.188196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.188340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.188353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.188517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.188536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.188674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.188687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.188773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.188786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.188957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.188971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.189054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.189067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 
00:38:18.534 [2024-12-13 10:40:12.189208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.189222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.189453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.189467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.534 [2024-12-13 10:40:12.189619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.534 [2024-12-13 10:40:12.189633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.534 qpair failed and we were unable to recover it. 00:38:18.535 [2024-12-13 10:40:12.189784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.535 [2024-12-13 10:40:12.189826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.535 qpair failed and we were unable to recover it. 00:38:18.535 [2024-12-13 10:40:12.189973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.535 [2024-12-13 10:40:12.190016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.535 qpair failed and we were unable to recover it. 00:38:18.535 [2024-12-13 10:40:12.190226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.535 [2024-12-13 10:40:12.190270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.535 qpair failed and we were unable to recover it. 00:38:18.535 [2024-12-13 10:40:12.190413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.535 [2024-12-13 10:40:12.190493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.535 qpair failed and we were unable to recover it. 00:38:18.535 [2024-12-13 10:40:12.190707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.535 [2024-12-13 10:40:12.190750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.535 qpair failed and we were unable to recover it. 00:38:18.535 [2024-12-13 10:40:12.190914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.535 [2024-12-13 10:40:12.190956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.535 qpair failed and we were unable to recover it. 00:38:18.535 [2024-12-13 10:40:12.191227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.535 [2024-12-13 10:40:12.191270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.535 qpair failed and we were unable to recover it. 
00:38:18.535 [2024-12-13 10:40:12.191500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.535 [2024-12-13 10:40:12.191544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.535 qpair failed and we were unable to recover it. 00:38:18.535 [2024-12-13 10:40:12.191821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.535 [2024-12-13 10:40:12.191836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.535 qpair failed and we were unable to recover it. 00:38:18.535 [2024-12-13 10:40:12.191937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.535 [2024-12-13 10:40:12.191951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.535 qpair failed and we were unable to recover it. 00:38:18.535 [2024-12-13 10:40:12.192103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.535 [2024-12-13 10:40:12.192117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.535 qpair failed and we were unable to recover it. 00:38:18.535 [2024-12-13 10:40:12.192208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.535 [2024-12-13 10:40:12.192221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.535 qpair failed and we were unable to recover it. 00:38:18.535 [2024-12-13 10:40:12.192381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.535 [2024-12-13 10:40:12.192394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.535 qpair failed and we were unable to recover it. 00:38:18.535 [2024-12-13 10:40:12.192493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.535 [2024-12-13 10:40:12.192507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.535 qpair failed and we were unable to recover it. 00:38:18.535 [2024-12-13 10:40:12.192596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.535 [2024-12-13 10:40:12.192610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.535 qpair failed and we were unable to recover it. 00:38:18.535 [2024-12-13 10:40:12.192751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.535 [2024-12-13 10:40:12.192765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.535 qpair failed and we were unable to recover it. 00:38:18.535 [2024-12-13 10:40:12.192923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.535 [2024-12-13 10:40:12.192936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.535 qpair failed and we were unable to recover it. 
00:38:18.540 [2024-12-13 10:40:12.226029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.540 [2024-12-13 10:40:12.226072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.540 qpair failed and we were unable to recover it. 00:38:18.540 [2024-12-13 10:40:12.226211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.540 [2024-12-13 10:40:12.226254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.540 qpair failed and we were unable to recover it. 00:38:18.540 [2024-12-13 10:40:12.226471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.540 [2024-12-13 10:40:12.226515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.540 qpair failed and we were unable to recover it. 00:38:18.540 [2024-12-13 10:40:12.226824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.540 [2024-12-13 10:40:12.226867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.540 qpair failed and we were unable to recover it. 00:38:18.540 [2024-12-13 10:40:12.227050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.540 [2024-12-13 10:40:12.227066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.540 qpair failed and we were unable to recover it. 00:38:18.540 [2024-12-13 10:40:12.227135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.540 [2024-12-13 10:40:12.227149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.540 qpair failed and we were unable to recover it. 00:38:18.540 [2024-12-13 10:40:12.227297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.540 [2024-12-13 10:40:12.227312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.540 qpair failed and we were unable to recover it. 00:38:18.540 [2024-12-13 10:40:12.227401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.540 [2024-12-13 10:40:12.227415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.540 qpair failed and we were unable to recover it. 00:38:18.540 [2024-12-13 10:40:12.227510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.540 [2024-12-13 10:40:12.227524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.540 qpair failed and we were unable to recover it. 00:38:18.540 [2024-12-13 10:40:12.227618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.540 [2024-12-13 10:40:12.227632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.540 qpair failed and we were unable to recover it. 
00:38:18.540 [2024-12-13 10:40:12.227713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.540 [2024-12-13 10:40:12.227726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.540 qpair failed and we were unable to recover it. 00:38:18.540 [2024-12-13 10:40:12.227872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.540 [2024-12-13 10:40:12.227886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.540 qpair failed and we were unable to recover it. 00:38:18.540 [2024-12-13 10:40:12.228055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.540 [2024-12-13 10:40:12.228107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.540 qpair failed and we were unable to recover it. 00:38:18.540 [2024-12-13 10:40:12.228244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.540 [2024-12-13 10:40:12.228288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.540 qpair failed and we were unable to recover it. 00:38:18.540 [2024-12-13 10:40:12.228574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.540 [2024-12-13 10:40:12.228619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.540 qpair failed and we were unable to recover it. 00:38:18.540 [2024-12-13 10:40:12.228736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.540 [2024-12-13 10:40:12.228750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.540 qpair failed and we were unable to recover it. 00:38:18.540 [2024-12-13 10:40:12.228908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.540 [2024-12-13 10:40:12.228923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.540 qpair failed and we were unable to recover it. 00:38:18.540 [2024-12-13 10:40:12.229130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.540 [2024-12-13 10:40:12.229145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.229293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.229309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.229379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.229392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 
00:38:18.541 [2024-12-13 10:40:12.229567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.229582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.229728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.229744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.229831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.229846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.229929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.229943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.230013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.230027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.230098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.230113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.230328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.230372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.230533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.230579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.230858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.230903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.231104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.231159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 
00:38:18.541 [2024-12-13 10:40:12.231382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.231426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.231586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.231630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.231794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.231837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.232105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.232121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.232256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.232271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.232410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.232425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.232527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.232544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.232625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.232639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.232791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.232806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.232890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.232903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 
00:38:18.541 [2024-12-13 10:40:12.233040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.233053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.233121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.233135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.233202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.233215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.233297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.233311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.233467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.233481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.233568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.233582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.233666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.233680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.233838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.233853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.233990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.234005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.234083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.234097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 
00:38:18.541 [2024-12-13 10:40:12.234186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.234200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.234342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.234357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.234436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.234454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.234613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.234630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.234775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.234790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.234925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.541 [2024-12-13 10:40:12.234940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.541 qpair failed and we were unable to recover it. 00:38:18.541 [2024-12-13 10:40:12.235027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.235041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.235176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.235192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.235281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.235294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.235364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.235378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 
00:38:18.542 [2024-12-13 10:40:12.235543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.235557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.235628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.235642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.235738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.235752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.235890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.235905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.236046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.236061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.236219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.236234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.236319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.236333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.236438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.236457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.236670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.236715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.236935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.236979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 
00:38:18.542 [2024-12-13 10:40:12.237131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.237176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.237379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.237423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.237660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.237705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.237898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.237940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.238103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.238146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.238341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.238386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.238661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.238713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.238859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.238903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.239097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.239112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.239209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.239222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 
00:38:18.542 [2024-12-13 10:40:12.239434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.239458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.239540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.239554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.239637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.239651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.239724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.239775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.239913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.239957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.240239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.240284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.240435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.240492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.240754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.240769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.240997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.241012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.241164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.241178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 
00:38:18.542 [2024-12-13 10:40:12.241324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.241340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.241473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.241489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.241670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.241685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.241777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.241797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.241876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.542 [2024-12-13 10:40:12.241891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.542 qpair failed and we were unable to recover it. 00:38:18.542 [2024-12-13 10:40:12.241988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.242004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.242091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.242104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.242275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.242319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.242467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.242511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.242720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.242764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 
00:38:18.543 [2024-12-13 10:40:12.242890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.242934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.243221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.243265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.243528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.243573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.243778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.243823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.244123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.244138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.244230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.244245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.244472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.244517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.244720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.244763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.244898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.244914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.245066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.245081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 
00:38:18.543 [2024-12-13 10:40:12.245162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.245177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.245269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.245283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.245372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.245386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.245467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.245482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.245559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.245574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.245669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.245684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.245846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.245896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.246031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.246075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.246268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.246312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.246509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.246553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 
00:38:18.543 [2024-12-13 10:40:12.246775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.246790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.246877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.246892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.246961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.246974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.247137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.247180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.247315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.247358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.247487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.247531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.247724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.247768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.247934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.247949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.248042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.248057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.248141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.248157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 
00:38:18.543 [2024-12-13 10:40:12.248259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.248274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.248521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.248537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.248676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.248691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.248853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.543 [2024-12-13 10:40:12.248895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.543 qpair failed and we were unable to recover it. 00:38:18.543 [2024-12-13 10:40:12.249035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.544 [2024-12-13 10:40:12.249079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.544 qpair failed and we were unable to recover it. 00:38:18.544 [2024-12-13 10:40:12.249282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.544 [2024-12-13 10:40:12.249325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.544 qpair failed and we were unable to recover it. 00:38:18.544 [2024-12-13 10:40:12.249474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.544 [2024-12-13 10:40:12.249518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.544 qpair failed and we were unable to recover it. 00:38:18.544 [2024-12-13 10:40:12.249801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.544 [2024-12-13 10:40:12.249844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.544 qpair failed and we were unable to recover it. 00:38:18.544 [2024-12-13 10:40:12.250029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.544 [2024-12-13 10:40:12.250043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.544 qpair failed and we were unable to recover it. 00:38:18.544 [2024-12-13 10:40:12.250108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.544 [2024-12-13 10:40:12.250121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.544 qpair failed and we were unable to recover it. 
00:38:18.544 [2024-12-13 10:40:12.250273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:18.544 [2024-12-13 10:40:12.250287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:18.544 qpair failed and we were unable to recover it.
00:38:18.544 to 00:38:18.549 [2024-12-13 10:40:12.250490 through 10:40:12.286699] The three messages above repeat for every subsequent connection attempt in this window (dozens of attempts); only the microsecond timestamps and the qpair handle change. Most attempts are against tqpair=0x61500033fe80, with a smaller number against tqpair=0x615000350000, 0x615000326480, and 0x61500032ff80. Every attempt fails with connect() errno = 111 against addr=10.0.0.2, port=4420, and each ends with "qpair failed and we were unable to recover it."
00:38:18.549 [2024-12-13 10:40:12.286841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.549 [2024-12-13 10:40:12.286858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.549 qpair failed and we were unable to recover it. 00:38:18.549 [2024-12-13 10:40:12.287021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.549 [2024-12-13 10:40:12.287067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.549 qpair failed and we were unable to recover it. 00:38:18.549 [2024-12-13 10:40:12.287199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.549 [2024-12-13 10:40:12.287243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.549 qpair failed and we were unable to recover it. 00:38:18.549 [2024-12-13 10:40:12.287404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.549 [2024-12-13 10:40:12.287458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.549 qpair failed and we were unable to recover it. 00:38:18.549 [2024-12-13 10:40:12.287771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.549 [2024-12-13 10:40:12.287814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.549 qpair failed and we were unable to recover it. 00:38:18.549 [2024-12-13 10:40:12.287941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.549 [2024-12-13 10:40:12.287983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.549 qpair failed and we were unable to recover it. 00:38:18.549 [2024-12-13 10:40:12.288117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.549 [2024-12-13 10:40:12.288133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.549 qpair failed and we were unable to recover it. 00:38:18.549 [2024-12-13 10:40:12.288294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.549 [2024-12-13 10:40:12.288341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.549 qpair failed and we were unable to recover it. 00:38:18.549 [2024-12-13 10:40:12.288550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.549 [2024-12-13 10:40:12.288593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.549 qpair failed and we were unable to recover it. 00:38:18.549 [2024-12-13 10:40:12.288789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.549 [2024-12-13 10:40:12.288834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.549 qpair failed and we were unable to recover it. 
00:38:18.549 [2024-12-13 10:40:12.289003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.289018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.289189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.289204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.289367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.289409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.289580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.289626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.289843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.289886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.290161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.290205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.290399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.290443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.290655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.290700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.290899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.290915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.291129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.291147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 
00:38:18.550 [2024-12-13 10:40:12.291399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.291418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.291566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.291583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.291670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.291684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.291770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.291784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.291947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.291962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.292255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.292299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.292500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.292556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.292776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.292825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.293006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.293021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.293198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.293242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 
00:38:18.550 [2024-12-13 10:40:12.293470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.293515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.293727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.293771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.294001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.294045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.294336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.294387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.294561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.294606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.294751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.294795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.295002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.295018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.295177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.295216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.295460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.295505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.295767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.295811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 
00:38:18.550 [2024-12-13 10:40:12.296020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.296036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.296259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.296284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.296423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.296439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.296557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.296573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.296709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.296724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.296810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.296825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.297041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.297084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.297241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.297286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.297487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.297531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.550 qpair failed and we were unable to recover it. 00:38:18.550 [2024-12-13 10:40:12.297736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.550 [2024-12-13 10:40:12.297779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 
00:38:18.551 [2024-12-13 10:40:12.297981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.298022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.298217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.298272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.298496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.298512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.298601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.298615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.298780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.298817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.298984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.299029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.299236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.299280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.299488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.299533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.299823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.299866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.300065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.300081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 
00:38:18.551 [2024-12-13 10:40:12.300285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.300301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.300467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.300512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.300667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.300712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.300848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.300892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.301144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.301160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.301387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.301402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.301484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.301499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.301663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.301709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.301918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.301962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.302110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.302156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 
00:38:18.551 [2024-12-13 10:40:12.302297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.302340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.302563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.302608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.302803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.302848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.303037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.303088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.303234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.303278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.303474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.303519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.303760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.303804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.303940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.303983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.304210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.304253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.304439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.304459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 
00:38:18.551 [2024-12-13 10:40:12.304549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.304563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.304746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.304762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.304914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.304930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.305026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.305041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.305187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.305207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.305296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.305310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.305398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.305411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.305569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.305585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.551 [2024-12-13 10:40:12.305670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.551 [2024-12-13 10:40:12.305684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.551 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.305845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.305861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 
00:38:18.552 [2024-12-13 10:40:12.305998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.306014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.306115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.306129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.306229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.306273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.306550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.306594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.306881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.306924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.307178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.307193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.307403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.307418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.307645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.307661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.307753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.307767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.307904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.307920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 
00:38:18.552 [2024-12-13 10:40:12.307992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.308007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.308086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.308099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.308270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.308285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.308369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.308384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.308547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.308593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.308857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.308901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.309106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.309149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.309367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.309382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.309532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.309548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.309687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.309703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 
00:38:18.552 [2024-12-13 10:40:12.309850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.309894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.310179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.310222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.310349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.310393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.310613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.310666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.310880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.310924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.311155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.311200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.311452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.311468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.311718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.311734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.311884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.311899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.312037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.312052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 
00:38:18.552 [2024-12-13 10:40:12.312296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.312312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.312455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.312471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.312555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.312569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.312724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.312739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.312953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.312969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.313145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.313161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.313374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.552 [2024-12-13 10:40:12.313390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.552 qpair failed and we were unable to recover it. 00:38:18.552 [2024-12-13 10:40:12.313627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.313646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.313786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.313802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.313957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.313971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 
00:38:18.553 [2024-12-13 10:40:12.314125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.314140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.314422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.314436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.314509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.314524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.314620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.314634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.314841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.314856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.314922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.314936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.315096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.315113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.315275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.315289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.315440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.315461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.315560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.315574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 
00:38:18.553 [2024-12-13 10:40:12.315658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.315673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.315837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.315852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.315934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.315948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.316151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.316194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.316337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.316381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.316534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.316578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.316798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.316843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.317160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.317205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.317417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.317485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.317691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.317736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 
00:38:18.553 [2024-12-13 10:40:12.317926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.317968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.318156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.318171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.318372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.318388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.318528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.318546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.318698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.318713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.318791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.318805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.318887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.318901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.319108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.319125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.319274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.319289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.319370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.319384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 
00:38:18.553 [2024-12-13 10:40:12.319483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.319498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.553 [2024-12-13 10:40:12.319732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.553 [2024-12-13 10:40:12.319748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.553 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.319837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.319851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.319985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.320000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.320139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.320155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.320311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.320326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.320458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.320473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.320629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.320645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.320859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.320875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.321027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.321042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 
00:38:18.554 [2024-12-13 10:40:12.321138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.321152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.321245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.321259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.321468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.321483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.321585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.321599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.321680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.321695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.321857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.321873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.322034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.322050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.322191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.322233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.322431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.322488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.322751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.322794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 
00:38:18.554 [2024-12-13 10:40:12.323010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.323057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.323208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.323254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.323494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.323541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.323669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.323686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.323930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.323946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.324007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.324022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.324173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.324189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.324391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.324406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.324579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.324594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.324676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.324691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 
00:38:18.554 [2024-12-13 10:40:12.324893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.324908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.325110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.325125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.325197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.325211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.325294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.325310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.325456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.325470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.325539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.325553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.325725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.325741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.325920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.325935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.326036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.326050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.554 [2024-12-13 10:40:12.326220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.326264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 
00:38:18.554 [2024-12-13 10:40:12.326397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.554 [2024-12-13 10:40:12.326440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.554 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.326722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.326766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.326990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.327005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.327072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.327086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.327237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.327251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.327482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.327498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.327664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.327680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.327849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.327864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.327965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.327979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.328132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.328147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 
00:38:18.555 [2024-12-13 10:40:12.328294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.328309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.328443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.328467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.328615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.328630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.328711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.328727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.328960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.328976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.329074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.329094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.329165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.329179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.329245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.329259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.329463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.329479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.329564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.329578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 
00:38:18.555 [2024-12-13 10:40:12.329673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.329700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.329820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.329848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.330039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.330070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.330232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.330249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.330323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.330337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.330421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.330435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.330534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.330549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.330621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.330634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.330769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.330784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.330923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.330964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 
00:38:18.555 [2024-12-13 10:40:12.331127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.331171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.331383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.331425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.331715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.331765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.332048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.332115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.332236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.332263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.332507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.332524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.332680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.332696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.332815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.332828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.332968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.332984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 00:38:18.555 [2024-12-13 10:40:12.333064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.555 [2024-12-13 10:40:12.333079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.555 qpair failed and we were unable to recover it. 
00:38:18.555 [2024-12-13 10:40:12.333211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.333227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.333321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.333336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.333426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.333441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.333601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.333616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.333778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.333793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.334015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.334030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.334110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.334123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.334196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.334210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.334282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.334296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.334374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.334389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 
00:38:18.556 [2024-12-13 10:40:12.334525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.334539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.334691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.334706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.334775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.334790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.334867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.334880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.335040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.335056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.335131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.335145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.335305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.335349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.335622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.335671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.335882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.335908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.336211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.336237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 
00:38:18.556 [2024-12-13 10:40:12.336348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.336365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.336559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.336576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.336790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.336805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.336955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.336970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.337195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.337210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.337437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.337456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.337634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.337649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.337805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.337823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.337907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.337921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.338005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.338019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 
00:38:18.556 [2024-12-13 10:40:12.338225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.338241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.338405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.338460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.338608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.338653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.338857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.338906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.339120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.339162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.339300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.339344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.339527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.339542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.339753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.339767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.339938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.556 [2024-12-13 10:40:12.339952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.556 qpair failed and we were unable to recover it. 00:38:18.556 [2024-12-13 10:40:12.340040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.340054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 
00:38:18.557 [2024-12-13 10:40:12.340206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.340221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.340425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.340440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.340531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.340545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.340704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.340719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.340937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.340952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.341156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.341171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.341312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.341328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.341491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.341507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.341653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.341689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.341789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.341805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 
00:38:18.557 [2024-12-13 10:40:12.342062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.342107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.342267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.342311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.342525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.342568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.342778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.342821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.342970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.343013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.343279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.343294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.343458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.343473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.343647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.343662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.343746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.343760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.343915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.343930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 
00:38:18.557 [2024-12-13 10:40:12.344036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.344061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.344190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.344217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.344464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.344494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.344645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.344662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.344799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.344814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.344894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.344908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.344976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.344989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.345143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.345160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.345233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.345248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.345408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.345465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 
00:38:18.557 [2024-12-13 10:40:12.345603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.345647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.345863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.345906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.346159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.346174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.346244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.346260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.346399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.346414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.346676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.346692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.346846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.346860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.346948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.557 [2024-12-13 10:40:12.346962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.557 qpair failed and we were unable to recover it. 00:38:18.557 [2024-12-13 10:40:12.347117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.347133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.347275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.347316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 
00:38:18.558 [2024-12-13 10:40:12.347561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.347607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.347767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.347810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.348052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.348095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.348239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.348253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.348417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.348432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.348589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.348604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.348762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.348777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.348883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.348898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.348968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.348981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.349124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.349140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 
00:38:18.558 [2024-12-13 10:40:12.349208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.349223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.349391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.349406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.349498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.349513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.349625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.349669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.349875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.349917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.350049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.350090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.350274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.350289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.350462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.350508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.350664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.350705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.350844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.350886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 
00:38:18.558 [2024-12-13 10:40:12.351161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.351211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.351362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.351387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.351494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.351521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.351625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.351642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.351727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.351741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.351892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.351907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.352063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.352078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.352234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.352250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.352340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.352353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.352587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.352633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 
00:38:18.558 [2024-12-13 10:40:12.352783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.352827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.352964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.353009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.353143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.558 [2024-12-13 10:40:12.353184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.558 qpair failed and we were unable to recover it. 00:38:18.558 [2024-12-13 10:40:12.353378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.353430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.353598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.353642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.353855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.353898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.354045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.354088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.354317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.354362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.354566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.354630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.354849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.354899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 
00:38:18.559 [2024-12-13 10:40:12.355020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.355044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.355157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.355183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.355348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.355364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.355531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.355575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.355862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.355907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.356042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.356057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.356283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.356299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.356394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.356409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.356504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.356519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.356591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.356605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 
00:38:18.559 [2024-12-13 10:40:12.356668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.356681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.356824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.356841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.356929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.356943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.357038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.357079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.357220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.357262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.357395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.357436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.357689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.357734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.357874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.357915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.358183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.358221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.358301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.358315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 
00:38:18.559 [2024-12-13 10:40:12.358440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.358471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.358663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.358688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.358869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.358894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.359115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.359133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.359216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.359230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.359386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.359402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.359495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.359509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.359596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.359610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.359688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.359734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.359882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.359928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 
00:38:18.559 [2024-12-13 10:40:12.360133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.360177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.360369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.559 [2024-12-13 10:40:12.360413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.559 qpair failed and we were unable to recover it. 00:38:18.559 [2024-12-13 10:40:12.360642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.360689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.360912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.360970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.361114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.361162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.361371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.361416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.361649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.361693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.361993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.362037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.362173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.362216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.362417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.362471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 
00:38:18.560 [2024-12-13 10:40:12.362734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.362776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.362988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.363031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.363209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.363223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.363298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.363334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.363493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.363539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.363739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.363783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.364009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.364064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.364148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.364162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.364309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.364324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.364566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.364587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 
00:38:18.560 [2024-12-13 10:40:12.364668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.364683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.364828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.364842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.365045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.365089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.365239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.365280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.365482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.365498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.365661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.365704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.365855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.365897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.366162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.366206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.366334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.366378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.366472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.366486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 
00:38:18.560 [2024-12-13 10:40:12.366583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.366598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.366767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.366782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.366851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.366865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.367036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.367078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.367310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.367353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.367592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.367638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.367899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.367943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.368087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.368131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.368382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.368396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.368551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.368568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 
00:38:18.560 [2024-12-13 10:40:12.368742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.560 [2024-12-13 10:40:12.368788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.560 qpair failed and we were unable to recover it. 00:38:18.560 [2024-12-13 10:40:12.369057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.369113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.369255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.369298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.369474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.369491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.369573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.369602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.369832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.369875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.370133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.370174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.370467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.370513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.370659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.370704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.370927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.370969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 
00:38:18.561 [2024-12-13 10:40:12.371175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.371217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.371412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.371426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.371639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.371653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.371736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.371749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.371951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.371966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.372215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.372229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.372376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.372390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.372570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.372586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.372788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.372803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.372886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.372900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 
00:38:18.561 [2024-12-13 10:40:12.373074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.373089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.373250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.373266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.373354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.373368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.373513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.373528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.373615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.373629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.373836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.373852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.373942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.373956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.374057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.374072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.374215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.374259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.374397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.374441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 
00:38:18.561 [2024-12-13 10:40:12.374675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.374726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.374940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.374985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.375075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.375089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.375231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.375246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.375430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.375446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.375677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.375693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.375866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.375881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.375989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.376032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.376235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.376278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 00:38:18.561 [2024-12-13 10:40:12.376421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.561 [2024-12-13 10:40:12.376490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.561 qpair failed and we were unable to recover it. 
00:38:18.561 [2024-12-13 10:40:12.376711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:18.561 [2024-12-13 10:40:12.376755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:18.561 qpair failed and we were unable to recover it.
00:38:18.561 [2024-12-13 10:40:12.376942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:18.561 [2024-12-13 10:40:12.376958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:18.561 qpair failed and we were unable to recover it.
00:38:18.849 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 2024-12-13 10:40:12.377049 through 10:40:12.414168 ...]
00:38:18.849 [2024-12-13 10:40:12.414375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.849 [2024-12-13 10:40:12.414419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.849 qpair failed and we were unable to recover it. 00:38:18.849 [2024-12-13 10:40:12.414693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.849 [2024-12-13 10:40:12.414739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.849 qpair failed and we were unable to recover it. 00:38:18.849 [2024-12-13 10:40:12.414937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.849 [2024-12-13 10:40:12.414979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.849 qpair failed and we were unable to recover it. 00:38:18.849 [2024-12-13 10:40:12.415173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.849 [2024-12-13 10:40:12.415187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.849 qpair failed and we were unable to recover it. 00:38:18.849 [2024-12-13 10:40:12.415344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.849 [2024-12-13 10:40:12.415385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.849 qpair failed and we were unable to recover it. 00:38:18.849 [2024-12-13 10:40:12.415542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.849 [2024-12-13 10:40:12.415586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.849 qpair failed and we were unable to recover it. 00:38:18.849 [2024-12-13 10:40:12.415796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.849 [2024-12-13 10:40:12.415841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.849 qpair failed and we were unable to recover it. 00:38:18.849 [2024-12-13 10:40:12.416204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.849 [2024-12-13 10:40:12.416285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.849 qpair failed and we were unable to recover it. 00:38:18.849 [2024-12-13 10:40:12.416494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.849 [2024-12-13 10:40:12.416540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.849 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.416835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.416921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 
00:38:18.850 [2024-12-13 10:40:12.417225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.417285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.417502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.417517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.417751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.417795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.418056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.418098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.418298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.418341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.418618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.418673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.418848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.418906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.419163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.419191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.419338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.419355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.419564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.419608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 
00:38:18.850 [2024-12-13 10:40:12.419872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.419922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.420145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.420161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.420397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.420440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.420629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.420673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.420935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.420979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.421268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.421312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.421527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.421542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.421818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.421833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.422035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.422050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.422189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.422204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 
00:38:18.850 [2024-12-13 10:40:12.422409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.422423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.422518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.422532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.422685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.422700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.422770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.422785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.422874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.422888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.422982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.422997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.423202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.423217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.423353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.423368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.423526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.423542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.423721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.423736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 
00:38:18.850 [2024-12-13 10:40:12.423839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.423854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.424023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.424038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.424119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.424132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.424270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.424285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.850 qpair failed and we were unable to recover it. 00:38:18.850 [2024-12-13 10:40:12.424421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.850 [2024-12-13 10:40:12.424436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.424518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.424532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.424662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.424678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.424856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.424900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.425112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.425156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.425286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.425330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 
00:38:18.851 [2024-12-13 10:40:12.425582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.425599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.425689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.425703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.425918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.425935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.426095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.426139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.426290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.426335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.426561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.426605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.426717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.426733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.426813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.426827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.426996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.427040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.427255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.427315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 
00:38:18.851 [2024-12-13 10:40:12.427488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.427522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.427801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.427853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.428006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.428052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.428269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.428291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.428405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.428428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.428546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.428563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.428662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.428676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.428763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.428777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.428924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.428940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.429090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.429105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 
00:38:18.851 [2024-12-13 10:40:12.429251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.429266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.429411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.429427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.429532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.429546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.429646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.429662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.429754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.429768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.429910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.429954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.430146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.430162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.430244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.430258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.430412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.430427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.430571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.430586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 
00:38:18.851 [2024-12-13 10:40:12.430672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.430686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.430770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.430784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.430875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.851 [2024-12-13 10:40:12.430889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.851 qpair failed and we were unable to recover it. 00:38:18.851 [2024-12-13 10:40:12.430973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.430991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.431065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.431079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.431249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.431264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.431412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.431427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.431599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.431655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.431835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.431889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.432139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.432188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 
00:38:18.852 [2024-12-13 10:40:12.432423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.432479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.432696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.432717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.432887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.432909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.433173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.433189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.433275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.433288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.433365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.433379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.433530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.433546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.433682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.433696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.433880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.433895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.434039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.434054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 
00:38:18.852 [2024-12-13 10:40:12.434188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.434205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.434302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.434318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.434469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.434485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.434600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.434643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.434851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.434893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.435119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.435171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.435243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.435256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.435400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.435414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.435569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.435585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.435741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.435756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 
00:38:18.852 [2024-12-13 10:40:12.435823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.435837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.435990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.436005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.436165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.436182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.436346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.436387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.436533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.436577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.436714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.436759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.436908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.436951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.437201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.437215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.437375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.437390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 00:38:18.852 [2024-12-13 10:40:12.437596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.852 [2024-12-13 10:40:12.437611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.852 qpair failed and we were unable to recover it. 
00:38:18.852 [2024-12-13 10:40:12.437798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.437813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.437904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.437917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.438173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.438188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.438357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.438372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.438604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.438619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.438772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.438787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.438927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.438942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.439098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.439113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.439250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.439265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.439347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.439361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 
00:38:18.853 [2024-12-13 10:40:12.439440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.439460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.439605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.439620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.439781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.439796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.439945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.439961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.440053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.440068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.440215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.440260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.440496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.440542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.440762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.440807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.440943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.440986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.441228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.441271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 
00:38:18.853 [2024-12-13 10:40:12.441416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.441483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.441665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.441679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.441881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.441896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.442138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.442153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.442232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.442245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.442397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.442412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.442512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.442528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.442623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.442639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.442789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.442804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.442952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.442967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 
00:38:18.853 [2024-12-13 10:40:12.443115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.443130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.443282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.443297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.443406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.443421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.443581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.443605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.443765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.443780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.443917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.443932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.444107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.444149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.444358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.853 [2024-12-13 10:40:12.444400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.853 qpair failed and we were unable to recover it. 00:38:18.853 [2024-12-13 10:40:12.444563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.444606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.444823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.444867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 
00:38:18.854 [2024-12-13 10:40:12.445069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.445113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.445323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.445373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.445522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.445538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.445620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.445636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.445798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.445813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.445924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.445969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.446176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.446220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.446433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.446544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.446797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.446823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.446933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.446960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 
00:38:18.854 [2024-12-13 10:40:12.447215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.447261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.447500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.447545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.447755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.447770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.447960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.448003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.448263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.448308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.448427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.448443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.448594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.448610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.448752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.448767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.448948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.448962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.449191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.449233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 
00:38:18.854 [2024-12-13 10:40:12.449442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.449504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.449654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.449696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.449820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.449862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.450077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.450120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.450315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.450356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.450566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.450611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.450815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.450857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.451060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.451103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.451287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.451329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.451458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.451473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 
00:38:18.854 [2024-12-13 10:40:12.451681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.854 [2024-12-13 10:40:12.451696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.854 qpair failed and we were unable to recover it. 00:38:18.854 [2024-12-13 10:40:12.451793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.451807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.452020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.452062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.452201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.452244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.452459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.452505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.452676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.452691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.452821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.452863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.452988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.453032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.453171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.453213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.453356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.453400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 
00:38:18.855 [2024-12-13 10:40:12.453628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.453674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.453836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.453879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.454019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.454061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.454256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.454298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.454520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.454536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.454683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.454727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.454938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.454981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.455255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.455297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.455382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.455396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.455548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.455600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 
00:38:18.855 [2024-12-13 10:40:12.455882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.455925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.456080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.456124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.456427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.456441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.456601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.456616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.456793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.456835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.457030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.457075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.457285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.457329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.457592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.457608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.457690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.457705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.457806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.457822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 
00:38:18.855 [2024-12-13 10:40:12.457917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.457938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.458087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.458103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.458195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.458208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.458304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.458348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.458556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.458602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.458758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.458802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.458941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.458983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.459117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.459160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.459441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.459494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 00:38:18.855 [2024-12-13 10:40:12.459635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.855 [2024-12-13 10:40:12.459650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.855 qpair failed and we were unable to recover it. 
00:38:18.856 [2024-12-13 10:40:12.459835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.459878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.460153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.460197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.460354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.460406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.460545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.460561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.460733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.460748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.460834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.460848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.460943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.460957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.461094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.461109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.461256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.461271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.461418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.461433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 
00:38:18.856 [2024-12-13 10:40:12.461575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.461590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.461756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.461800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.461946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.461989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.462196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.462239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.462490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.462506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.462591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.462605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.462835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.462850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.463014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.463032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.463131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.463145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.463314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.463357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 
00:38:18.856 [2024-12-13 10:40:12.463565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.463610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.463811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.463855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.463993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.464036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.464321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.464364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.464550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.464565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.464648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.464661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.464884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.464900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.465033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.465049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.465221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.465265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.465488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.465533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 
00:38:18.856 [2024-12-13 10:40:12.465733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.465777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.466098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.466143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.466337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.466381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.466597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.466644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.466848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.466863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.466956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.466970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.467074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.467089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.467247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.856 [2024-12-13 10:40:12.467262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.856 qpair failed and we were unable to recover it. 00:38:18.856 [2024-12-13 10:40:12.467423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.467481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.467617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.467662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 
00:38:18.857 [2024-12-13 10:40:12.467885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.467928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.468060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.468103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.468244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.468288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.468445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.468467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.468606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.468622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.468860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.468876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.469031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.469046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.469125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.469139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.469222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.469235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.469380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.469395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 
00:38:18.857 [2024-12-13 10:40:12.469619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.469635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.469726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.469741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.469828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.469842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.469984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.470000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.470174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.470228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.470380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.470424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.470648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.470692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.470905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.470958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.471154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.471197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.471316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.471330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 
00:38:18.857 [2024-12-13 10:40:12.471414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.471427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.471537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.471580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.471729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.471773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.471981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.472025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.472218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.472260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.472469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.472490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.472570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.472584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.472798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.472842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.473126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.473168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 00:38:18.857 [2024-12-13 10:40:12.473365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.857 [2024-12-13 10:40:12.473410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.857 qpair failed and we were unable to recover it. 
00:38:18.857 [2024-12-13 10:40:12.473675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:18.857 [2024-12-13 10:40:12.473691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:18.857 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for each subsequent reconnect attempt through [2024-12-13 10:40:12.516619] ...]
00:38:18.863 [2024-12-13 10:40:12.516902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.863 [2024-12-13 10:40:12.516946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.863 qpair failed and we were unable to recover it. 00:38:18.863 [2024-12-13 10:40:12.517152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.863 [2024-12-13 10:40:12.517195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.863 qpair failed and we were unable to recover it. 00:38:18.863 [2024-12-13 10:40:12.517406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.863 [2024-12-13 10:40:12.517463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.863 qpair failed and we were unable to recover it. 00:38:18.863 [2024-12-13 10:40:12.517622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.863 [2024-12-13 10:40:12.517637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.863 qpair failed and we were unable to recover it. 00:38:18.863 [2024-12-13 10:40:12.517796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.863 [2024-12-13 10:40:12.517840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.863 qpair failed and we were unable to recover it. 00:38:18.863 [2024-12-13 10:40:12.518148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.863 [2024-12-13 10:40:12.518190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.863 qpair failed and we were unable to recover it. 00:38:18.863 [2024-12-13 10:40:12.518411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.863 [2024-12-13 10:40:12.518526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.863 qpair failed and we were unable to recover it. 00:38:18.863 [2024-12-13 10:40:12.518795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.863 [2024-12-13 10:40:12.518839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.863 qpair failed and we were unable to recover it. 00:38:18.863 [2024-12-13 10:40:12.519045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.863 [2024-12-13 10:40:12.519088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.863 qpair failed and we were unable to recover it. 00:38:18.863 [2024-12-13 10:40:12.519321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.863 [2024-12-13 10:40:12.519364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.863 qpair failed and we were unable to recover it. 
00:38:18.863 [2024-12-13 10:40:12.519522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.863 [2024-12-13 10:40:12.519538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.863 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.519693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.519709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.519862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.519877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.520019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.520063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.520322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.520365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.520568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.520584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.520748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.520763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.521000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.521051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.521353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.521396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.521611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.521627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 
00:38:18.864 [2024-12-13 10:40:12.521794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.521837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.522098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.522142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.522403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.522464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.522659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.522703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.522910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.522953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.523097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.523138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.523275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.523318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.523573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.523617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.523749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.523791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.523929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.523945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 
00:38:18.864 [2024-12-13 10:40:12.524085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.524105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.524344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.524360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.524459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.524474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.524643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.524688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.524885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.524928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.525137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.525181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.525326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.525371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.525594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.525640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.525868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.525912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.526218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.526261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 
00:38:18.864 [2024-12-13 10:40:12.526471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.526516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.526725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.526767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.526964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.526979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.527134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.527149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.527265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.527281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.527371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.527386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.527459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.527474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.527698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.527713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.527876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.864 [2024-12-13 10:40:12.527892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.864 qpair failed and we were unable to recover it. 00:38:18.864 [2024-12-13 10:40:12.527989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.528003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 
00:38:18.865 [2024-12-13 10:40:12.528214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.528259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.528479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.528524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.528720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.528762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.528897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.528912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.529072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.529098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.529174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.529187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.529338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.529353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.529558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.529575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.529673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.529686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.529821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.529836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 
00:38:18.865 [2024-12-13 10:40:12.529993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.530036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.530237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.530280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.530595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.530641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.530790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.530833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.531116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.531160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.531389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.531433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.531650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.531665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.531824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.531839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.531986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.532001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.532235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.532279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 
00:38:18.865 [2024-12-13 10:40:12.532487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.532532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.532743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.532758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.532844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.532858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.532958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.532972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.533108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.533123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.533191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.533205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.533272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.533286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.533367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.533381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.533458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.533474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.533624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.533638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 
00:38:18.865 [2024-12-13 10:40:12.533745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.533785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.533980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.534023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.534228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.534269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.534503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.534543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.534798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.534855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.534992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.535036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.535306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.535349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.535498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.865 [2024-12-13 10:40:12.535513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.865 qpair failed and we were unable to recover it. 00:38:18.865 [2024-12-13 10:40:12.535729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.535772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.535906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.535948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 
00:38:18.866 [2024-12-13 10:40:12.536213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.536256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.536530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.536580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.536790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.536826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.536978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.536993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.537085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.537103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.537192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.537206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.537445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.537511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.537651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.537706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.537902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.537945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.538207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.538251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 
00:38:18.866 [2024-12-13 10:40:12.538404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.538460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.538661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.538705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.538913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.538928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.538996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.539009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.539073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.539087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.539180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.539194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.539356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.539399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.539534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.539578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.539792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.539846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.539914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.539927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 
00:38:18.866 [2024-12-13 10:40:12.540012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.540026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.540181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.540196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.540273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.540286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.540396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.540441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.540619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.540664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.540809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.540852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.541121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.541165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.541308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.541363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.541535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.541551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.541649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.541693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 
00:38:18.866 [2024-12-13 10:40:12.541981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.542025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.542219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.542263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.542473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.542517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.542727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.542774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.542912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.542927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.543004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.866 [2024-12-13 10:40:12.543018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.866 qpair failed and we were unable to recover it. 00:38:18.866 [2024-12-13 10:40:12.543189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.543204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.543444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.543498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.543639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.543683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.543821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.543836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 
00:38:18.867 [2024-12-13 10:40:12.543985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.544000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.544156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.544199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.544336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.544379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.544613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.544658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.544821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.544851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.545002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.545045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.545220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.545267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.545556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.545606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.545829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.545873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.546158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.546202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 
00:38:18.867 [2024-12-13 10:40:12.546425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.546480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.546587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.546602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.546758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.546787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.546935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.546978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.547172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.547215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.547412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.547465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.547665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.547709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.547843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.547886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.548043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.548057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.548145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.548158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 
00:38:18.867 [2024-12-13 10:40:12.548368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.548411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.548687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.548732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.548914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.548928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.549154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.549168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.549407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.549474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.549682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.549726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.549915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.549931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.550037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.550052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.550142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.550175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.550352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.550396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 
00:38:18.867 [2024-12-13 10:40:12.550605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.550650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.550874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.550918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.551061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.551104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.551247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.551290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.551517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.867 [2024-12-13 10:40:12.551532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.867 qpair failed and we were unable to recover it. 00:38:18.867 [2024-12-13 10:40:12.551666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.551682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.551766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.551779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.551927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.551942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.552081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.552096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.552251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.552266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 
00:38:18.868 [2024-12-13 10:40:12.552428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.552483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.552684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.552729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.552938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.552982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.553115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.553159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.553429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.553499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.553655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.553670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.553884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.553927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.554125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.554175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.554392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.554437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.554687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.554732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 
00:38:18.868 [2024-12-13 10:40:12.554938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.554981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.555141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.555184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.555318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.555361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.555477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.555493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.555627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.555656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.555925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.555970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.556241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.556285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.556481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.556526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.556707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.556722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.556893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.556937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 
00:38:18.868 [2024-12-13 10:40:12.557223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.557266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.557472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.557488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.557580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.557617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.557748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.557792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.557918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.557962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.558249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.868 [2024-12-13 10:40:12.558292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.868 qpair failed and we were unable to recover it. 00:38:18.868 [2024-12-13 10:40:12.558492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.558538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.558810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.558854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.559064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.559107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.559336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.559380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 
00:38:18.869 [2024-12-13 10:40:12.559610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.559656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.559869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.559912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.560065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.560107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.560341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.560396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.560683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.560730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.561015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.561061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.561289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.561337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.561507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.561524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.561677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.561692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.561882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.561925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 
00:38:18.869 [2024-12-13 10:40:12.562138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.562181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.562413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.562468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.562674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.562716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.562865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.562909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.563115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.563158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.563357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.563401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.563639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.563687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.563780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.563796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.563933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.563948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.564124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.564138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 
00:38:18.869 [2024-12-13 10:40:12.564233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.564276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.564496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.564541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.564686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.564728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.564996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.565011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.565148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.565164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.565324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.565379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.565539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.565584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.565881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.565925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.566159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.566203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.566424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.566479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 
00:38:18.869 [2024-12-13 10:40:12.566618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.566661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.566874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.566919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.567072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.567115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.567272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.567317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.567504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.869 [2024-12-13 10:40:12.567520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.869 qpair failed and we were unable to recover it. 00:38:18.869 [2024-12-13 10:40:12.567732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.567775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.568082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.568125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.568331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.568376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.568528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.568572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.568795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.568810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 
00:38:18.870 [2024-12-13 10:40:12.568886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.568901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.569198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.569241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.569475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.569513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.569660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.569676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.569895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.569945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.570101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.570145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.570352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.570403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.570503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.570518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.570673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.570688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.570906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.570949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 
00:38:18.870 [2024-12-13 10:40:12.571172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.571216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.571414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.571467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.571753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.571797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.572045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.572060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.572292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.572307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.572403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.572416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.572554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.572570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.572773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.572788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.572933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.572947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.573043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.573100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 
00:38:18.870 [2024-12-13 10:40:12.573315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.573359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.573650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.573694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.573850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.573894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.574098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.574142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.574335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.574385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.574605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.574650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.574846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.574861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.575024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.575069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.575262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.575304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.575437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.575494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 
00:38:18.870 [2024-12-13 10:40:12.575697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.575712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.575930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.575974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.576270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.576314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.870 [2024-12-13 10:40:12.576464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.870 [2024-12-13 10:40:12.576516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.870 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.576598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.576612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.576746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.576761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.576990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.577005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.577114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.577129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.577277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.577292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.577389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.577404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 
00:38:18.871 [2024-12-13 10:40:12.577504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.577518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.577739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.577782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.578049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.578093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.578303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.578348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.578603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.578655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.578832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.578847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.579000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.579015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.579171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.579186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.579327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.579342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.579423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.579441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 
00:38:18.871 [2024-12-13 10:40:12.579605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.579620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.579789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.579833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.580059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.580102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.580257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.580301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.580427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.580480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.580631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.580674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.580877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.580921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.581125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.581169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.581381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.581425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.581581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.581626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 
00:38:18.871 [2024-12-13 10:40:12.581824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.581868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.582019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.582062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.582292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.582336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.582651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.582698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.582972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.583015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.583238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.583282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.583518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.583565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.583771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.583814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.583970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.584013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 00:38:18.871 [2024-12-13 10:40:12.584227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.871 [2024-12-13 10:40:12.584272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.871 qpair failed and we were unable to recover it. 
00:38:18.877 [2024-12-13 10:40:12.626480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.626496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.626708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.626753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.626963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.627007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.627264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.627308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.627572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.627617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.627751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.627766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.627902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.627946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.628188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.628231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.628503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.628549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.628756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.628771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 
00:38:18.877 [2024-12-13 10:40:12.628994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.629037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.629170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.629215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.629433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.629505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.629791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.629834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.630033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.630047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.630114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.630127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.630202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.630216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.630402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.630444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.630667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.630711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.630917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.630960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 
00:38:18.877 [2024-12-13 10:40:12.631117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.631132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.631369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.631412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.631582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.631627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.631899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.631942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.632228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.632282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.632503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.632549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.632746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.632789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.632993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.633036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.633266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.633310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 00:38:18.877 [2024-12-13 10:40:12.633578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.877 [2024-12-13 10:40:12.633622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.877 qpair failed and we were unable to recover it. 
00:38:18.878 [2024-12-13 10:40:12.633802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.633816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.633899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.633912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.634014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.634027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.634102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.634117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.634271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.634286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.634510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.634554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.634702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.634717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.634877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.634892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.634991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.635005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.635072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.635087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 
00:38:18.878 [2024-12-13 10:40:12.635300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.635344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.635545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.635590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.635723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.635738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.635872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.635887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.636049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.636064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.636247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.636302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.636439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.636496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.636755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.636799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.636911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.636926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.637065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.637104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 
00:38:18.878 [2024-12-13 10:40:12.637407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.637463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.637614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.637656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.637798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.637841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.638150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.638194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.638352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.638395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.638648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.638693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.638950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.638994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.639138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.639180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.639473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.639517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.639801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.639845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 
00:38:18.878 [2024-12-13 10:40:12.640102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.640117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.640263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.640278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.640512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.640558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.640716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.640759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.641046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.641097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.641289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.641305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.641494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.641537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.641799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.641815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.878 qpair failed and we were unable to recover it. 00:38:18.878 [2024-12-13 10:40:12.641923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.878 [2024-12-13 10:40:12.641967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.642162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.642206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 
00:38:18.879 [2024-12-13 10:40:12.642401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.642444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.642592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.642638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.642779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.642821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.643033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.643076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.643293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.643337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.643626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.643670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.643879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.643923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.644209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.644253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.644471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.644514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.644736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.644751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 
00:38:18.879 [2024-12-13 10:40:12.644903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.644948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.645163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.645205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.645349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.645393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.645637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.645682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.645923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.645967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.646157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.646171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.646313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.646355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.646504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.646549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.646760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.646804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.646960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.647000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 
00:38:18.879 [2024-12-13 10:40:12.647138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.647153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.647376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.647391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.647596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.647611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.647691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.647705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.647883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.647926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.648163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.648207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.648341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.648385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.648608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.648653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.648926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.648970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.649183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.649228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 
00:38:18.879 [2024-12-13 10:40:12.649435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.649506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.649771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.649814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.650076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.650091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.650341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.650356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.650500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.650518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.650684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.650698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.650926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.650970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.879 qpair failed and we were unable to recover it. 00:38:18.879 [2024-12-13 10:40:12.651200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.879 [2024-12-13 10:40:12.651243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.651473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.651519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.651798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.651854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 
00:38:18.880 [2024-12-13 10:40:12.652063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.652078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.652260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.652304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.652567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.652613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.652814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.652857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.653013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.653027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.653164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.653179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.653324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.653339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.653496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.653511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.653663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.653678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.653827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.653841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 
00:38:18.880 [2024-12-13 10:40:12.653992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.654007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.654161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.654176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.654257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.654270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.654365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.654379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.654514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.654529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.654666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.654681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.654766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.654779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.655014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.655029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.655185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.655200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.655312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.655326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 
00:38:18.880 [2024-12-13 10:40:12.655422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.655437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.655572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.655617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.655768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.655814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.656042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.656089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.656184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.656201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.656356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.656371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.656582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.656598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.656732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.656748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.656902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.656918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 00:38:18.880 [2024-12-13 10:40:12.657002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.657016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it. 
00:38:18.880 [2024-12-13 10:40:12.657242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.880 [2024-12-13 10:40:12.657257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.880 qpair failed and we were unable to recover it.
00:38:18.880 - 00:38:18.882 [... the same three messages (connect() failed, errno = 111; sock connection error; "qpair failed and we were unable to recover it.") repeat for every connection attempt between 10:40:12.657 and 10:40:12.668, almost all against tqpair=0x61500033fe80, with single attempts at 10:40:12.661 against tqpair=0x615000350000, 0x61500032ff80, and 0x615000326480, all targeting addr=10.0.0.2, port=4420 ...]
00:38:18.882 - 00:38:18.886 [... the identical failure sequence continues between 10:40:12.668 and 10:40:12.701: one attempt each against tqpair=0x615000326480, 0x615000350000, and 0x61500032ff80 around 10:40:12.668 to 10:40:12.669, and repeated attempts against tqpair=0x61500033fe80, all to addr=10.0.0.2, port=4420, every one ending in "qpair failed and we were unable to recover it." ...]
00:38:18.886 [2024-12-13 10:40:12.701210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.886 [2024-12-13 10:40:12.701225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.886 qpair failed and we were unable to recover it.
00:38:18.886 [2024-12-13 10:40:12.701463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.886 [2024-12-13 10:40:12.701478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.886 qpair failed and we were unable to recover it. 00:38:18.886 [2024-12-13 10:40:12.701649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.886 [2024-12-13 10:40:12.701664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.886 qpair failed and we were unable to recover it. 00:38:18.886 [2024-12-13 10:40:12.701744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.886 [2024-12-13 10:40:12.701793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.886 qpair failed and we were unable to recover it. 00:38:18.886 [2024-12-13 10:40:12.702028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.886 [2024-12-13 10:40:12.702071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.886 qpair failed and we were unable to recover it. 00:38:18.886 [2024-12-13 10:40:12.702332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.886 [2024-12-13 10:40:12.702375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.886 qpair failed and we were unable to recover it. 00:38:18.886 [2024-12-13 10:40:12.702670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.886 [2024-12-13 10:40:12.702716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.886 qpair failed and we were unable to recover it. 00:38:18.886 [2024-12-13 10:40:12.702992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.886 [2024-12-13 10:40:12.703035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.886 qpair failed and we were unable to recover it. 00:38:18.886 [2024-12-13 10:40:12.703243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.886 [2024-12-13 10:40:12.703286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.886 qpair failed and we were unable to recover it. 00:38:18.886 [2024-12-13 10:40:12.703602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.886 [2024-12-13 10:40:12.703648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.886 qpair failed and we were unable to recover it. 00:38:18.886 [2024-12-13 10:40:12.703841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.886 [2024-12-13 10:40:12.703884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.886 qpair failed and we were unable to recover it. 
00:38:18.886 [2024-12-13 10:40:12.704148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.886 [2024-12-13 10:40:12.704191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.886 qpair failed and we were unable to recover it. 00:38:18.886 [2024-12-13 10:40:12.704356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.886 [2024-12-13 10:40:12.704371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.886 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.704577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.704592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.704836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.704851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.704994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.705009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.705146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.705161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.705413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.705467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.705741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.705784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.705987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.706030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.706231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.706273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 
00:38:18.887 [2024-12-13 10:40:12.706478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.706522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.706736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.706786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.707001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.707044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.707249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.707264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.707402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.707479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.707682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.707725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.707989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.708043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.708184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.708198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.708267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.708282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.708500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.708520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 
00:38:18.887 [2024-12-13 10:40:12.708695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.708711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.708805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.708849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.709054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.709097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.709297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.709340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.709602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.709647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.709852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.709897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.710047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.710062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.710302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.710317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.710482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.710498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.710722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.710737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 
00:38:18.887 [2024-12-13 10:40:12.710899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.710943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.711143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.711188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.711477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.711521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.711690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.711735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.711996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.712011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.712222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.712237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.712378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.712394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.712585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.712600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.712700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.887 [2024-12-13 10:40:12.712716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.887 qpair failed and we were unable to recover it. 00:38:18.887 [2024-12-13 10:40:12.712864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.888 [2024-12-13 10:40:12.712879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.888 qpair failed and we were unable to recover it. 
00:38:18.888 [2024-12-13 10:40:12.713026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.888 [2024-12-13 10:40:12.713042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.888 qpair failed and we were unable to recover it. 00:38:18.888 [2024-12-13 10:40:12.713120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.888 [2024-12-13 10:40:12.713134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.888 qpair failed and we were unable to recover it. 00:38:18.888 [2024-12-13 10:40:12.713340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.888 [2024-12-13 10:40:12.713355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.888 qpair failed and we were unable to recover it. 00:38:18.888 [2024-12-13 10:40:12.713504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.888 [2024-12-13 10:40:12.713519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.888 qpair failed and we were unable to recover it. 00:38:18.888 [2024-12-13 10:40:12.713671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.888 [2024-12-13 10:40:12.713715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.888 qpair failed and we were unable to recover it. 00:38:18.888 [2024-12-13 10:40:12.713854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.888 [2024-12-13 10:40:12.713898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.888 qpair failed and we were unable to recover it. 00:38:18.888 [2024-12-13 10:40:12.714105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.888 [2024-12-13 10:40:12.714149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.888 qpair failed and we were unable to recover it. 00:38:18.888 [2024-12-13 10:40:12.714299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.888 [2024-12-13 10:40:12.714315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.888 qpair failed and we were unable to recover it. 00:38:18.888 [2024-12-13 10:40:12.714395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.888 [2024-12-13 10:40:12.714409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.888 qpair failed and we were unable to recover it. 00:38:18.888 [2024-12-13 10:40:12.714568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:18.888 [2024-12-13 10:40:12.714584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:18.888 qpair failed and we were unable to recover it. 
00:38:19.172 [2024-12-13 10:40:12.714728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.172 [2024-12-13 10:40:12.714743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.172 qpair failed and we were unable to recover it. 00:38:19.172 [2024-12-13 10:40:12.714879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.172 [2024-12-13 10:40:12.714896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.172 qpair failed and we were unable to recover it. 00:38:19.172 [2024-12-13 10:40:12.715066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.172 [2024-12-13 10:40:12.715082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.172 qpair failed and we were unable to recover it. 00:38:19.172 [2024-12-13 10:40:12.715152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.172 [2024-12-13 10:40:12.715165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.172 qpair failed and we were unable to recover it. 00:38:19.172 [2024-12-13 10:40:12.715242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.172 [2024-12-13 10:40:12.715256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.172 qpair failed and we were unable to recover it. 00:38:19.172 [2024-12-13 10:40:12.715469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.172 [2024-12-13 10:40:12.715485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.172 qpair failed and we were unable to recover it. 00:38:19.172 [2024-12-13 10:40:12.715664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.172 [2024-12-13 10:40:12.715678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.172 qpair failed and we were unable to recover it. 00:38:19.172 [2024-12-13 10:40:12.715770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.172 [2024-12-13 10:40:12.715783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.172 qpair failed and we were unable to recover it. 00:38:19.172 [2024-12-13 10:40:12.715880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.172 [2024-12-13 10:40:12.715893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.172 qpair failed and we were unable to recover it. 00:38:19.172 [2024-12-13 10:40:12.715991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.172 [2024-12-13 10:40:12.716005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.172 qpair failed and we were unable to recover it. 
00:38:19.172 [2024-12-13 10:40:12.716161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.716175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.716262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.716276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.716419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.716432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.716522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.716536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.716673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.716687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.716784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.716798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.716977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.716991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.717081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.717095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.717191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.717204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.717274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.717287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 
00:38:19.173 [2024-12-13 10:40:12.717368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.717383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.717526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.717540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.717625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.717639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.717737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.717751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.717958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.717971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.718135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.718151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.718245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.718259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.718344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.718359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.718517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.718537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.718692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.718707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 
00:38:19.173 [2024-12-13 10:40:12.718852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.718867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.719019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.719034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.719110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.719124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.719215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.719229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.719310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.719324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.719523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.719539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.719692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.719708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.719932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.719948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.720035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.720048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.720145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.720160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 
00:38:19.173 [2024-12-13 10:40:12.720229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.720244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.720387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.720404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.720617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.720633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.720740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.720755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.720832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.720848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.720926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.720940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.721085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.721100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.173 [2024-12-13 10:40:12.721263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.173 [2024-12-13 10:40:12.721306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.173 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.721562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.721607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.721827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.721870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 
00:38:19.174 [2024-12-13 10:40:12.722062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.722106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.722288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.722303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.722457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.722472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.722620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.722635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.722807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.722821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.722911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.722925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.723057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.723101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.723368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.723412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.723579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.723625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.723860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.723903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 
00:38:19.174 [2024-12-13 10:40:12.724117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.724132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.724297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.724342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.724500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.724544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.724832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.724847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.725009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.725023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.725171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.725186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.725325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.725341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.725437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.725461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.725605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.725620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.725850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.725866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 
00:38:19.174 [2024-12-13 10:40:12.725948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.725962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.726042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.726056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.726255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.726270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.726340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.726355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.726482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.726528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.726658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.726702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.726853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.726896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.727116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.727132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.727231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.727245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.727467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.727483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 
00:38:19.174 [2024-12-13 10:40:12.727561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.727574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.727721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.727739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.727828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.727843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.727980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.727997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.728089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.728104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.728249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.728291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.174 [2024-12-13 10:40:12.728440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.174 [2024-12-13 10:40:12.728496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.174 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.728635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.728680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.728880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.728923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.729115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.729157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 
00:38:19.175 [2024-12-13 10:40:12.729361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.729376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.729472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.729488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.729656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.729675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.729830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.729874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.730166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.730210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.730418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.730473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.730677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.730721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.730920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.730936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.731011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.731024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.731200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.731243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 
00:38:19.175 [2024-12-13 10:40:12.731398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.731442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.731762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.731817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.731902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.731917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.731998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.732012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.732169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.732211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.732477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.732522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.732721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.732765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.732969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.733014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.733242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.733258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.733411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.733426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 
00:38:19.175 [2024-12-13 10:40:12.733515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.733529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.733684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.733700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.733795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.733810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.733953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.733968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.734196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.734211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.734366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.734382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.734562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.734578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.734761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.734776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.734927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.734942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.735093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.735109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 
00:38:19.175 [2024-12-13 10:40:12.735193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.735249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.735460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.735512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.175 [2024-12-13 10:40:12.735661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.175 [2024-12-13 10:40:12.735706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.175 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.735983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.736029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.736183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.736227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.736422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.736437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.736596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.736614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.736694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.736708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.736899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.736943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.737172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.737224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 
00:38:19.176 [2024-12-13 10:40:12.737370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.737413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.737713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.737803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.738044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.738124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.738410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.738466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.738636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.738652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.738828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.738844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.739050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.739065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.739157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.739171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.739243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.739256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.739352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.739366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 
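Note on the records above: they all follow the same three-part pattern. SPDK's POSIX socket layer (posix.c, posix_sock_create) reports connect() failing with errno = 111 (ECONNREFUSED) against 10.0.0.2 port 4420 (the conventional NVMe/TCP port), and the NVMe/TCP driver (nvme_tcp.c, nvme_tcp_qpair_connect_sock) then marks the qpair connect as failed; only the timestamps and the tqpair addresses (0x61500033fe80, 0x615000326480, 0x615000350000, 0x61500032ff80) vary. Errno 111 means the target actively refused the TCP connection, typically because nothing is listening on that address/port at that moment. The following is a minimal standalone sketch, not SPDK code, that reproduces the same errno against the address and port copied from the log; substitute your own target as needed.

/*
 * Minimal sketch (not SPDK code): reproduce the errno = 111 (ECONNREFUSED)
 * condition logged by posix_sock_create above by attempting a blocking TCP
 * connect() to an address/port with no listener.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* With no listener on the target, this prints errno 111 (Connection refused). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}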
00:38:19.176 [2024-12-13 10:40:12.739445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.739465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.739626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.739641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.739790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.739805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.739966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.739981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.740047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.740060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.740135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.740175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.740323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.740379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.740627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.740684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.740931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.740987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.741225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.741277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 
00:38:19.176 [2024-12-13 10:40:12.741457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.741482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.741654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.741676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.741921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.741938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.742091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.742106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.742197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.742210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.742287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.742301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.742460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.742476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.742685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.742700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.742795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.742809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.742956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.742971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 
00:38:19.176 [2024-12-13 10:40:12.743109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.176 [2024-12-13 10:40:12.743124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.176 qpair failed and we were unable to recover it. 00:38:19.176 [2024-12-13 10:40:12.743196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.743212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.743308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.743321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.743401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.743414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.743560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.743576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.743661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.743681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.743765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.743778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.743926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.743941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.744144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.744159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.744236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.744251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 
00:38:19.177 [2024-12-13 10:40:12.744403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.744418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.744504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.744518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.744720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.744735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.744824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.744837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.744997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.745012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.745171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.745186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.745321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.745375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.745527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.745573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.745776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.745820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.745927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.745942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 
00:38:19.177 [2024-12-13 10:40:12.746120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.746176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.746373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.746415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.746644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.746688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.746880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.746924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.747204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.747218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.747361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.747376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.747439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.747463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.747686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.747702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.747880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.747906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.748123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.748151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 
00:38:19.177 [2024-12-13 10:40:12.748378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.748406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.748496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.748512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.748660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.177 [2024-12-13 10:40:12.748675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.177 qpair failed and we were unable to recover it. 00:38:19.177 [2024-12-13 10:40:12.748898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.748913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.749099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.749113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.749182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.749195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.749334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.749349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.749553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.749569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.749641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.749655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.749860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.749875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 
00:38:19.178 [2024-12-13 10:40:12.750102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.750116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.750209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.750228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.750411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.750426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.750585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.750600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.750861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.750904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.751104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.751119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.751208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.751266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.751525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.751569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.751715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.751760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.751900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.751943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 
00:38:19.178 [2024-12-13 10:40:12.752128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.752143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.752225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.752238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.752438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.752458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.752630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.752645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.752848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.752863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.753091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.753106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.753189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.753202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.753294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.753308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.753398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.753412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.753519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.753534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 
00:38:19.178 [2024-12-13 10:40:12.753719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.753735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.753812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.753825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.754029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.754044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.754208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.754222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.754307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.754321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.754419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.754435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.754627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.754688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.754843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.754893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.755069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.755118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 00:38:19.178 [2024-12-13 10:40:12.755253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.755277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.178 qpair failed and we were unable to recover it. 
00:38:19.178 [2024-12-13 10:40:12.755371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.178 [2024-12-13 10:40:12.755395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.755587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.755610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.755791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.755809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.755957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.755973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.756131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.756176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.756378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.756423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.756623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.756668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.756978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.757024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.757242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.757333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.757622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.757667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 
00:38:19.179 [2024-12-13 10:40:12.757866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.757910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.758058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.758109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.758279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.758294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.758503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.758519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.758611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.758625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.758767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.758782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.758949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.758965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.759121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.759136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.759235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.759250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.759341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.759355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 
00:38:19.179 [2024-12-13 10:40:12.759439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.759458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.759559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.759574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.759664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.759678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.759773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.759788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.759877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.759893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.760064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.760110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.760316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.760359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.760524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.760570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.760745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.760793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.761010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.761025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 
00:38:19.179 [2024-12-13 10:40:12.761158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.761174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.761271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.761286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.761516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.761534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.761679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.761695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.761779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.761794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.761935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.761966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.762169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.762213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.762343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.762387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.762564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.762615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.179 qpair failed and we were unable to recover it. 00:38:19.179 [2024-12-13 10:40:12.762786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.179 [2024-12-13 10:40:12.762838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 
00:38:19.180 [2024-12-13 10:40:12.763019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.763046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.763135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.763152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.763240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.763254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.763407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.763468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.763758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.763802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.763930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.763974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.764118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.764134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.764332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.764376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.764515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.764569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.764721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.764766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 
00:38:19.180 [2024-12-13 10:40:12.764908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.764951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.765079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.765097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.765251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.765266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.765407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.765476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.765640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.765683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.765821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.765865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.766008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.766052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.766266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.766310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.766460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.766506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.766638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.766681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 
00:38:19.180 [2024-12-13 10:40:12.766813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.766857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.766998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.767042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.767190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.767240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.767323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.767337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.767562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.767578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.767730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.767745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.767821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.767836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.767974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.767989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.768137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.768153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.768359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.768403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 
00:38:19.180 [2024-12-13 10:40:12.768558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.768601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.768811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.768855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.768969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.768984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.769203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.769247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.769499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.769546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.769757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.769814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.770028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.770073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.180 [2024-12-13 10:40:12.770230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.180 [2024-12-13 10:40:12.770274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.180 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.770493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.770519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.770677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.770699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 
00:38:19.181 [2024-12-13 10:40:12.770802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.770825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.770948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.770972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.771081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.771105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.771340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.771386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.771587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.771631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.771787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.771831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.771997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.772041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.772177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.772220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.772368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.772390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.772595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.772620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 
00:38:19.181 [2024-12-13 10:40:12.772779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.772808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.772981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.773007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.773178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.773201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.773372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.773394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.773565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.773589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.773754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.773777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.773873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.773893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.774037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.774052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.774137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.774151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.774304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.774319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 
00:38:19.181 [2024-12-13 10:40:12.774402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.774417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.774652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.774668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.774819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.774836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.774978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.774995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.775137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.775153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.775299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.775315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.775400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.775415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.775617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.775633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.181 [2024-12-13 10:40:12.775780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.181 [2024-12-13 10:40:12.775795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.181 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.775878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.775893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 
00:38:19.182 [2024-12-13 10:40:12.775961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.775974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.776060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.776074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.776164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.776177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.776270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.776286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.776428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.776443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.776547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.776561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.776708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.776723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.776865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.776880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.777064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.777110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.777321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.777368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 
00:38:19.182 [2024-12-13 10:40:12.777555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.777579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.777680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.777695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.777795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.777810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.777970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.777986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.778133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.778149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.778311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.778354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.778500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.778545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.778768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.778812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.778940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.778983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.779146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.779190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 
00:38:19.182 [2024-12-13 10:40:12.779386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.779402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.779590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.779605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.779756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.779772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.779868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.779882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.779958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.779972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.780118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.780134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.780221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.780235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.780389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.780431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.780689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.780735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.780880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.780923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 
00:38:19.182 [2024-12-13 10:40:12.781169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.781211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.781402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.781444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.781632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.781648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.781816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.781831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.782049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.782064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.782245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.782260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.782434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.782454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.182 qpair failed and we were unable to recover it. 00:38:19.182 [2024-12-13 10:40:12.782592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.182 [2024-12-13 10:40:12.782624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.782791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.782807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.782901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.782919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 
00:38:19.183 [2024-12-13 10:40:12.783088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.783105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.783282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.783299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.783473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.783518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.783714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.783758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.783974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.784017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.784166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.784203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.784348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.784363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.784428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.784441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.784526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.784542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.784691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.784705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 
00:38:19.183 [2024-12-13 10:40:12.784920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.784940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.785029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.785043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.785133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.785149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.785296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.785312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.785397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.785412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.785499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.785513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.785603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.785619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.785691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.785705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.785794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.785836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.785972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.786015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 
00:38:19.183 [2024-12-13 10:40:12.786175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.786219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.786364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.786379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.786591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.786607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.786758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.786773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.786917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.786932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.787099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.787114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.787195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.787209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.787294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.787309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.787382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.787396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.787543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.787559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 
00:38:19.183 [2024-12-13 10:40:12.787630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.787644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.787780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.787795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.787882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.787896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.787988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.788004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.788095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.788108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.788263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.183 [2024-12-13 10:40:12.788277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.183 qpair failed and we were unable to recover it. 00:38:19.183 [2024-12-13 10:40:12.788424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.788438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.788582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.788597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.788749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.788763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.788828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.788841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 
00:38:19.184 [2024-12-13 10:40:12.788911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.788924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.788994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.789006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.789083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.789096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.789246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.789260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.789404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.789418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.789494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.789508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.789648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.789662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.789834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.789849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.789931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.789990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.790215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.790259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 
00:38:19.184 [2024-12-13 10:40:12.790488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.790531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.790688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.790732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.790878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.790920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.791037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.791051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.791188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.791203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.791341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.791355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.791455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.791471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.791617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.791632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.791720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.791735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.791825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.791840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 
00:38:19.184 [2024-12-13 10:40:12.791974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.791989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.792125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.792140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.792304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.792319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.792403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.792416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.792604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.792619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.792762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.792776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.792865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.792878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.792982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.792996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.793146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.793161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.793245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.793259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 
00:38:19.184 [2024-12-13 10:40:12.793396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.793410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.793537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.793552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.793683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.793698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.793771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.793785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.793884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.184 [2024-12-13 10:40:12.793907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.184 qpair failed and we were unable to recover it. 00:38:19.184 [2024-12-13 10:40:12.793991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.794007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.794145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.794160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.794236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.794251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.794406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.794421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.794564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.794580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 
00:38:19.185 [2024-12-13 10:40:12.794649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.794664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.794805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.794821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.794898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.794912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.795003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.795046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.795242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.795285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.795478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.795522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.795794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.795838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.796118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.796148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.796217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.796233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.796305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.796319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 
00:38:19.185 [2024-12-13 10:40:12.796427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.796442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.796584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.796599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.796754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.796769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.796900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.796915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.797053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.797069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.797153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.797167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.797317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.797332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.797411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.797425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.797571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.797587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.797724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.797739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 
00:38:19.185 [2024-12-13 10:40:12.797843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.797858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.797943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.798000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.798174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.798235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.798389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.798439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.798596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.798640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.798774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.798818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.799038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.799081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.799280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.799325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.799454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.799477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.799556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.799577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 
00:38:19.185 [2024-12-13 10:40:12.799825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.799848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.799946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.799963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.800105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.800120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.185 [2024-12-13 10:40:12.800265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.185 [2024-12-13 10:40:12.800280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.185 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.800349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.800363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.800435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.800455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.800593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.800607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.800762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.800777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.800862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.800877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.801035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.801079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 
00:38:19.186 [2024-12-13 10:40:12.801368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.801415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.801626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.801653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.801842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.801867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.801976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.801998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.802153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.802176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.802266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.802289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.802388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.802405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.802493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.802508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.802656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.802673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.802745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.802760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 
00:38:19.186 [2024-12-13 10:40:12.802904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.802919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.803003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.803018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.803156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.803206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.803426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.803484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.803752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.803796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.803927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.803969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.804142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.804187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.804332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.804375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.804581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.804625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.804827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.804871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 
00:38:19.186 [2024-12-13 10:40:12.805011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.805055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.805262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.805305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.805520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.805569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.186 [2024-12-13 10:40:12.805782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.186 [2024-12-13 10:40:12.805834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.186 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.806041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.806084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.806173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.806187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.806263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.806277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.806426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.806483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.806681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.806723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.806941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.806984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 
00:38:19.187 [2024-12-13 10:40:12.807102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.807118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.807223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.807238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.807381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.807397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.807533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.807549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.807684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.807699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.807841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.807857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.807939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.807954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.808155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.808197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.808337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.808378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.808598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.808641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 
00:38:19.187 [2024-12-13 10:40:12.808844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.808888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.809113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.809157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.809372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.809427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.809579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.809623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.809783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.809826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.810043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.810086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.810227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.810269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.810400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.810442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.810666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.810723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.810921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.810963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 
00:38:19.187 [2024-12-13 10:40:12.811160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.811203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.811342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.811358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.811506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.811522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.811618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.811635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.811775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.811790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.811861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.811874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.812025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.812040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.812181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.812196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.812283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.812299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.812439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.812460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 
00:38:19.187 [2024-12-13 10:40:12.812669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.812685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.812837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.812882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.187 qpair failed and we were unable to recover it. 00:38:19.187 [2024-12-13 10:40:12.813107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.187 [2024-12-13 10:40:12.813153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.813311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.813355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.813493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.813537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.813730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.813774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.813991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.814035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.814284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.814299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.814471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.814487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.814639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.814684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 
00:38:19.188 [2024-12-13 10:40:12.814827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.814871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.815067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.815111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.815248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.815264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.815422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.815437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.815587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.815602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.815704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.815719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.815817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.815832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.816001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.816046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.816187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.816230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.816429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.816485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 
00:38:19.188 [2024-12-13 10:40:12.816687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.816731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.816875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.816920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.817182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.817225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.817528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.817574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.817782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.817827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.817991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.818034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.818251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.818266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.818429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.818483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.818746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.818796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.819094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.819138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 
00:38:19.188 [2024-12-13 10:40:12.819282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.819326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.819452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.819475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.819553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.819567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.819659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.819703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.819965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.820010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.820200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.820215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.820290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.820303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.820394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.820407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.820546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.820560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.820693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.820709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 
00:38:19.188 [2024-12-13 10:40:12.820855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.820870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.188 qpair failed and we were unable to recover it. 00:38:19.188 [2024-12-13 10:40:12.821017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.188 [2024-12-13 10:40:12.821033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.821238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.821254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.821348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.821361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.821459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.821474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.821559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.821573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.821655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.821677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.821830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.821845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.821927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.821941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.822011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.822025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 
00:38:19.189 [2024-12-13 10:40:12.822169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.822184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.822329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.822374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.822551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.822595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.822729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.822773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.822919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.822962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.823182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.823197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.823397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.823413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.823502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.823517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.823654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.823670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.823752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.823766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 
00:38:19.189 [2024-12-13 10:40:12.823859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.823872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.824048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.824091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.824358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.824403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.824640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.824685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.824819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.824862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.825112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.825128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.825274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.825290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.825389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.825402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.825499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.825516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.825660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.825674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 
00:38:19.189 [2024-12-13 10:40:12.825886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.825901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.826036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.826052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.826138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.826152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.826287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.826303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.826393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.826408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.826496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.826511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.826584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.826598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.826751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.826795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.826912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.826956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 00:38:19.189 [2024-12-13 10:40:12.827167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.189 [2024-12-13 10:40:12.827211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.189 qpair failed and we were unable to recover it. 
00:38:19.190 [2024-12-13 10:40:12.827335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.827351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.827433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.827454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.827527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.827541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.827686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.827700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.827851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.827866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.828000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.828015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.828087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.828101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.828167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.828182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.828331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.828345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.828480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.828496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 
00:38:19.190 [2024-12-13 10:40:12.828701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.828716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.828807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.828825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.828899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.828913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.828991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.829005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.829211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.829226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.829351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.829396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.829538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.829584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.829758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.829784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.829883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.829900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.830127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.830142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 
00:38:19.190 [2024-12-13 10:40:12.830319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.830362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.830502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.830547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.830695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.830738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.830900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.830945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.831141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.831184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.831442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.831520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.831831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.831875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.832073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.832117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.832253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.832303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.832501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.832517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 
00:38:19.190 [2024-12-13 10:40:12.832670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.832686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.832821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.832836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.832980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.832995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.833197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.833216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.190 [2024-12-13 10:40:12.833301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.190 [2024-12-13 10:40:12.833315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.190 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.833508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.833523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.833662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.833677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.833823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.833838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.833981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.833998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.834165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.834208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 
00:38:19.191 [2024-12-13 10:40:12.834351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.834395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.834562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.834606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.834899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.834942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.835085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.835130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.835322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.835366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.835553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.835569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.835717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.835733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.835906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.835921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.836022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.836037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.836117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.836130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 
00:38:19.191 [2024-12-13 10:40:12.836281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.836297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.836437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.836459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.836553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.836569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.836794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.836810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.836970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.837011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.837238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.837293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.837462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.837524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.837802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.837849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.838046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.838092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.838249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.838273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 
00:38:19.191 [2024-12-13 10:40:12.838445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.838476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.838686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.838703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.838778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.838792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.838935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.838951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.839031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.839044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.839182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.839197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.839341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.839358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.839438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.839462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.839600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.839618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 00:38:19.191 [2024-12-13 10:40:12.839707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.191 [2024-12-13 10:40:12.839721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.191 qpair failed and we were unable to recover it. 
00:38:19.191 [2024-12-13 10:40:12.839951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.839994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.840140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.840183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.840380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.840424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.840637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.840681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.840823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.840866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.841089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.841132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.841279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.841323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.841508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.841524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.841674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.841718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.841911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.841955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 
00:38:19.192 [2024-12-13 10:40:12.842242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.842285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.842470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.842486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.842567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.842581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.842658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.842672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.842770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.842784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.842856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.842870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.843011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.843026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.843170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.843186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.843427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.843493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.843704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.843749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 
00:38:19.192 [2024-12-13 10:40:12.843958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.844001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.844208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.844223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.844462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.844508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.844709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.844754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.844947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.844991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.845221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.845273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.845486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.845527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.845634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.845657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.845839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.845887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.846203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.846246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 
00:38:19.192 [2024-12-13 10:40:12.846384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.846425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.846523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.846537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.846758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.846802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.847039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.847082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.847380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.847423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.847635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.847681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.847879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.847935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.848130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.848174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.848372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.192 [2024-12-13 10:40:12.848426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.192 qpair failed and we were unable to recover it. 00:38:19.192 [2024-12-13 10:40:12.848581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.848597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 
00:38:19.193 [2024-12-13 10:40:12.848690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.848705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.848804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.848819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.848905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.848918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.849078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.849094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.849296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.849310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.849443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.849505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.849653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.849696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.849844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.849888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.850083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.850134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.850298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.850313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 
00:38:19.193 [2024-12-13 10:40:12.850409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.850464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.850661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.850706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.850850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.850894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.851126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.851171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.851300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.851344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.851497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.851543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.851822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.851867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.852003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.852047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.852186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.852230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.852434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.852455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 
00:38:19.193 [2024-12-13 10:40:12.852551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.852566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.852717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.852732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.852962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.852977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.853082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.853097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.853186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.853201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.853303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.853319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.853403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.853416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.853671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.853717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.853929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.853972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.854171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.854215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 
00:38:19.193 [2024-12-13 10:40:12.854423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.854479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.854639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.854684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.854836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.854880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.855075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.855118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.855308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.855352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.193 [2024-12-13 10:40:12.855490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.193 [2024-12-13 10:40:12.855536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.193 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.855762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.855777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.855927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.855943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.856037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.856068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.856290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.856333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 
00:38:19.194 [2024-12-13 10:40:12.856532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.856585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.856794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.856837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.857098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.857143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.857322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.857337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.857419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.857433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.857576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.857591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.857755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.857770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.857845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.857860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.857950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.857964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.858047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.858061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 
00:38:19.194 [2024-12-13 10:40:12.858160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.858174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.858256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.858269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.858412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.858426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.858506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.858521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.858686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.858700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.858789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.858803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.858893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.858946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.859206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.859250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.859519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.859564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.859846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.859893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 
00:38:19.194 [2024-12-13 10:40:12.860179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.860235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.860522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.860567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.860675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.860690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.860763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.860777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.860994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.861009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.861206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.861303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.861581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.861667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.861946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.862033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.862249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.862295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.862521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.862536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 
00:38:19.194 [2024-12-13 10:40:12.862687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.862703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.862801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.862814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.862904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.862919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.863050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.863065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.194 [2024-12-13 10:40:12.863203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.194 [2024-12-13 10:40:12.863218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.194 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.863356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.863372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.863512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.863528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.863601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.863642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.863778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.863829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.864026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.864069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 
00:38:19.195 [2024-12-13 10:40:12.864331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.864374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.864616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.864662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.864811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.864855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.865067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.865111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.865305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.865350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.865480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.865496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.865653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.865668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.865762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.865775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.865921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.865936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.866088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.866103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 
00:38:19.195 [2024-12-13 10:40:12.866304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.866319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.866477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.866523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.866759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.866803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.867061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.867105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.867225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.867241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.867386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.867401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.867483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.867497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.867656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.867700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.867946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.867991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.868129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.868172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 
00:38:19.195 [2024-12-13 10:40:12.868360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.868404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.868562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.868599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.868750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.868765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.868857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.868871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.869014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.869029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.869192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.869223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.869425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.869458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.869573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.869600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.195 qpair failed and we were unable to recover it. 00:38:19.195 [2024-12-13 10:40:12.869835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.195 [2024-12-13 10:40:12.869853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.869946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.869993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 
00:38:19.196 [2024-12-13 10:40:12.870139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.870183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.870390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.870433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.870703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.870717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.870820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.870863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.871068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.871112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.871241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.871285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.871529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.871546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.871704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.871720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.871894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.871937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.872165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.872208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 
00:38:19.196 [2024-12-13 10:40:12.872353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.872398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.872621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.872636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.872852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.872868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.873023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.873038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.873254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.873298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.873491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.873536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.873751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.873796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.873995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.874038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.874315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.874331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.874545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.874591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 
00:38:19.196 [2024-12-13 10:40:12.874730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.874786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.874986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.875029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.875167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.875182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.875262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.875275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.875416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.875489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.875701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.875744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.875949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.875992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.876151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.876194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.876337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.876379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.876645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.876661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 
00:38:19.196 [2024-12-13 10:40:12.876808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.876823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.876971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.876986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.877188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.877203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.877346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.877388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.877546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.877592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.877730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.877781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.877992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.196 [2024-12-13 10:40:12.878037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.196 qpair failed and we were unable to recover it. 00:38:19.196 [2024-12-13 10:40:12.878334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.878378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.878523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.878538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.878721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.878763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 
00:38:19.197 [2024-12-13 10:40:12.879044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.879089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.879243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.879288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.879413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.879428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.879509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.879524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.879728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.879742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.879947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.879962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.880105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.880126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.880280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.880295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.880443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.880467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.880672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.880687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 
00:38:19.197 [2024-12-13 10:40:12.880843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.880888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.881114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.881158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.881291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.881336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.881504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.881522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.881659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.881676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.881762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.881776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.882081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.882126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.882335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.882396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.882641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.882687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.882919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.882967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 
00:38:19.197 [2024-12-13 10:40:12.883128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.883146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.883245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.883260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.883341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.883355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.883506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.883521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.883679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.883695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.883789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.883803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.883887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.883902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.884049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.884070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.884168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.884181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.884321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.884337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 
00:38:19.197 [2024-12-13 10:40:12.884477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.884492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.884647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.884663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.884733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.884748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.884882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.884897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.197 qpair failed and we were unable to recover it. 00:38:19.197 [2024-12-13 10:40:12.885054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.197 [2024-12-13 10:40:12.885097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.885265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.885329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.885617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.885667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.885920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.885972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.886278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.886324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.886481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.886539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 
00:38:19.198 [2024-12-13 10:40:12.886722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.886745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.886928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.886946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.887098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.887114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.887196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.887210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.887363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.887379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.887620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.887636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.887788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.887804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.887894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.887908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.888065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.888081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.888215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.888231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 
00:38:19.198 [2024-12-13 10:40:12.888301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.888315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.888401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.888415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.888572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.888590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.888651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.888670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.888750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.888765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.888920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.888935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.889096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.889111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.889208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.889223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.889430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.889445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.889599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.889615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 
00:38:19.198 [2024-12-13 10:40:12.889696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.889710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.889884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.889899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.890053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.890068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.890161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.890176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.890259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.890273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.890349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.890363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.890425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.890440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.890535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.890549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.890642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.890655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.890728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.890741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 
00:38:19.198 [2024-12-13 10:40:12.890809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.890824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.198 [2024-12-13 10:40:12.890959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.198 [2024-12-13 10:40:12.890973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.198 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.891044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.891057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.891196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.891213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.891289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.891303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.891459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.891478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.891617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.891633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.891765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.891780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.891947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.891962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.892097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.892111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 
00:38:19.199 [2024-12-13 10:40:12.892252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.892267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.892405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.892422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.892560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.892576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.892713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.892728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.892885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.892899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.892990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.893004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.893179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.893194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.893379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.893394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.893546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.893563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.893722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.893737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 
00:38:19.199 [2024-12-13 10:40:12.893914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.893929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.894072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.894086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.894229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.894244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.894306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.894319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.894409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.894423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.894657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.894673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.894744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.894758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.894832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.894846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.894993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.895007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.895082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.895097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 
00:38:19.199 [2024-12-13 10:40:12.895178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.895192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.895343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.895359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.895463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.895479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.895562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.895576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.895676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.895692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.895861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.895904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.896103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.896146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.896373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.896417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.896565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.896610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 00:38:19.199 [2024-12-13 10:40:12.896747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.199 [2024-12-13 10:40:12.896791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.199 qpair failed and we were unable to recover it. 
00:38:19.200 [2024-12-13 10:40:12.896938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.896981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.897206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.897250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.897444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.897515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.897696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.897722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.897977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.898006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.898205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.898259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.898555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.898581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.898738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.898761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.899013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.899036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.899236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.899253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 
00:38:19.200 [2024-12-13 10:40:12.899392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.899407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.899579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.899595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.899770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.899786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.899882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.899901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.900061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.900076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.900150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.900163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.900389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.900404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.900495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.900510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.900616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.900633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.900786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.900802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 
00:38:19.200 [2024-12-13 10:40:12.900874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.900888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.900989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.901031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.901306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.901350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.901551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.901596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.901726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.901742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.901882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.901897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.902044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.902059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.902292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.902307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.902461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.902477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.902684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.902701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 
00:38:19.200 [2024-12-13 10:40:12.902864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.902909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.903041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.903084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.903285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.903299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.200 [2024-12-13 10:40:12.903531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.200 [2024-12-13 10:40:12.903546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.200 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.903613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.903626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.903729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.903744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.903886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.903902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.904021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.904062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.904222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.904265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.904501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.904546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 
00:38:19.201 [2024-12-13 10:40:12.904715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.904730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.904939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.904955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.905048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.905063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.905166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.905181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.905326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.905341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.905558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.905579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.905780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.905796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.905889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.905903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.906043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.906059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.906202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.906218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 
00:38:19.201 [2024-12-13 10:40:12.906305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.906330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.906514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.906561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.906693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.906736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.907053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.907096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.907303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.907349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.907509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.907553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.907751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.907793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.908018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.908060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.908296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.908339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.908642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.908657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 
00:38:19.201 [2024-12-13 10:40:12.908800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.908815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.908895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.908910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.909055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.909070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.909222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.909237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.909415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.909429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.909602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.909658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.909798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.909851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.910016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.910065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.910295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.910340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.910556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.910601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 
00:38:19.201 [2024-12-13 10:40:12.910793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.910836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.911131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.911180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.201 [2024-12-13 10:40:12.911329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.201 [2024-12-13 10:40:12.911373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.201 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.911584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.911606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.911777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.911799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.911892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.911908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.912073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.912088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.912236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.912280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.912565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.912610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.912825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.912840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 
00:38:19.202 [2024-12-13 10:40:12.912911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.912924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.913007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.913021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.913102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.913117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.913271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.913315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.913510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.913557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.913797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.913854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.914007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.914062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.914272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.914316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.914524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.914541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.914611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.914625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 
00:38:19.202 [2024-12-13 10:40:12.914776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.914820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.915103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.915147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.915362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.915412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.915687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.915712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.915881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.915910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.916001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.916026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.916120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.916137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.916300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.916316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.916385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.916399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.916640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.916656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 
00:38:19.202 [2024-12-13 10:40:12.916731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.916745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.916814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.916827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.916976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.916991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.917150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.917165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.917365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.917381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.917485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.917501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.917573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.917587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.917677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.917692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.917829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.917846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 00:38:19.202 [2024-12-13 10:40:12.917942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.202 [2024-12-13 10:40:12.917958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.202 qpair failed and we were unable to recover it. 
00:38:19.203 [2024-12-13 10:40:12.918035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.918049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.918204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.918219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.918364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.918379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.918480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.918496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.918569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.918583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.918713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.918728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.918881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.918896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.918959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.918972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.919053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.919067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.919217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.919233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 
00:38:19.203 [2024-12-13 10:40:12.919323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.919339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.919491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.919507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.919593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.919608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.919691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.919704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.919863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.919879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.919982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.920031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.920233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.920278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.920419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.920474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.920665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.920681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.920935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.920950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 
00:38:19.203 [2024-12-13 10:40:12.921011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.921025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.921126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.921140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.921227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.921243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.921397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.921412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.921512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.921528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.921611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.921625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.921721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.921736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.921964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.921980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.922075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.922090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.922246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.922262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 
00:38:19.203 [2024-12-13 10:40:12.922422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.922436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.922580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.922595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.922703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.922746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.923020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.923061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.923207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.923249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.923384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.923399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.923617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.923662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.923805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.923847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.203 [2024-12-13 10:40:12.923963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.203 [2024-12-13 10:40:12.924005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.203 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.924269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.924312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 
00:38:19.204 [2024-12-13 10:40:12.924523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.924568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.924710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.924754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.925026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.925113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.925276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.925326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.925568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.925621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.925818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.925835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.925976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.925997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.926258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.926302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.926512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.926557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.926705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.926720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 
00:38:19.204 [2024-12-13 10:40:12.926858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.926873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.926974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.926989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.927069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.927084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.927322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.927336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.927554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.927569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.927739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.927757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.927889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.927904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.928093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.928137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.928339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.928383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.928605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.928649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 
00:38:19.204 [2024-12-13 10:40:12.928907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.928951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.929081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.929125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.929274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.929317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.929516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.929560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.929759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.929802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.930065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.930108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.930392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.930436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.930641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.930684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.930832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.930875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.931074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.931116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 
00:38:19.204 [2024-12-13 10:40:12.931338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.204 [2024-12-13 10:40:12.931380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.204 qpair failed and we were unable to recover it. 00:38:19.204 [2024-12-13 10:40:12.931610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.931655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.931846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.931889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.932031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.932073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.932214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.932258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.932520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.932551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.932690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.932706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.932814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.932830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.933014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.933056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.933197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.933241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 
00:38:19.205 [2024-12-13 10:40:12.933391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.933432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.933668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.933711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.933962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.934016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.934258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.934309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.934564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.934590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.934750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.934767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.934906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.934920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.934986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.934999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.935157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.935201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.935414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.935465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 
00:38:19.205 [2024-12-13 10:40:12.935724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.935767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.935991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.936035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.936355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.936399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.936745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.936794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.936957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.937000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.937285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.937336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.937553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.937599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.937860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.937904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.938137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.938180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.938309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.938353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 
00:38:19.205 [2024-12-13 10:40:12.938549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.938595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.938912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.938955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.939086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.939129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.939411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.939483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.939647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.939690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.939906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.939949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.940208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.940252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.940470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.940515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.940775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.940789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 00:38:19.205 [2024-12-13 10:40:12.941006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.205 [2024-12-13 10:40:12.941021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.205 qpair failed and we were unable to recover it. 
00:38:19.205 [2024-12-13 10:40:12.941173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.941188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.941369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.941412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.941712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.941799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.942111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.942159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.942468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.942526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.942754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.942800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.943069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.943112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.943403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.943460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.943773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.943819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.944044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.944089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 
00:38:19.206 [2024-12-13 10:40:12.944309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.944353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.944483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.944501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.944756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.944809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.945125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.945173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.945446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.945500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.945710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.945754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.945988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.946032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.946231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.946282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.946549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.946595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.946740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.946783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 
00:38:19.206 [2024-12-13 10:40:12.946972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.946995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.947247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.947271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.947482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.947498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.947583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.947597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.947679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.947692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.947842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.947891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.948098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.948141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.948282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.948324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.948570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.948586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.948669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.948683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 
00:38:19.206 [2024-12-13 10:40:12.948928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.948971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.949175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.949218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.949415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.949469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.949750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.949792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.949987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.950030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.950170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.950214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.950522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.950568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.950708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.950722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.206 [2024-12-13 10:40:12.950875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.206 [2024-12-13 10:40:12.950932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.206 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.951240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.951289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 
00:38:19.207 [2024-12-13 10:40:12.951598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.951644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.951840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.951866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.952055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.952102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.952370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.952413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.952601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.952629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.952866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.952910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.953119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.953164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.953416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.953470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.953714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.953768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.953964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.954008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 
00:38:19.207 [2024-12-13 10:40:12.954223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.954267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.954577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.954634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.954793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.954839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.955044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.955087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.955321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.955364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.955670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.955694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.955957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.956002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.956160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.956202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.956398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.956441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.956710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.956726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 
00:38:19.207 [2024-12-13 10:40:12.956874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.956889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.957035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.957079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.957230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.957274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.957473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.957518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.957729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.957744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.957891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.957942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.958080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.958122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.958274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.958317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.958537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.958583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.958741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.958784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 
00:38:19.207 [2024-12-13 10:40:12.958978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.959022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.959307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.959351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.959575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.959590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.959666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.959680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.959780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.959794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.207 [2024-12-13 10:40:12.960502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.207 [2024-12-13 10:40:12.960530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.207 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.960778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.960805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.960960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.960982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.961199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.961223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.961408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.961433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 
00:38:19.208 [2024-12-13 10:40:12.961615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.961638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.961790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.961812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.961901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.961918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.961997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.962011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.962120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.962135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.962272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.962287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.962460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.962475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.962549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.962564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.962649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.962663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.962808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.962822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 
00:38:19.208 [2024-12-13 10:40:12.962915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.962929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.963075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.963090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.963325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.963350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.963497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.963545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.963651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.963677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.963831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.963848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.963983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.963997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.964205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.964220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.964309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.964323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.964547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.964563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 
00:38:19.208 [2024-12-13 10:40:12.964715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.964731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.964979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.964994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.965141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.965156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.965253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.965267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.965403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.965419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.965505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.965522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.965673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.965689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.965839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.965854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.965939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.965953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.966104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.966139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 
00:38:19.208 [2024-12-13 10:40:12.966343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.966359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.966441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.966463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.966545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.966558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.966648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.966662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.208 [2024-12-13 10:40:12.966749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.208 [2024-12-13 10:40:12.966763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.208 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.966907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.966921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.967023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.967037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.967119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.967133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.967310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.967327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.967498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.967515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 
00:38:19.209 [2024-12-13 10:40:12.967658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.967674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.967749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.967763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.967849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.967863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.968004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.968019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.968103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.968122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.968263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.968278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.968364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.968377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.968458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.968473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.968578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.968592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.968746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.968764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 
00:38:19.209 [2024-12-13 10:40:12.968854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.968868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.968961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.968973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.969170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.969214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.969329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.969358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.969475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.969499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.969606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.969621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.969770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.969783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.969850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.969864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.969999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.970014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.970083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.970098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 
00:38:19.209 [2024-12-13 10:40:12.970320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.970335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.970419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.970433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.970511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.970525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.970683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.970698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.970779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.970793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.970929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.970945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.971096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.971111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.209 qpair failed and we were unable to recover it. 00:38:19.209 [2024-12-13 10:40:12.971192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.209 [2024-12-13 10:40:12.971206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.971341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.971356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.971445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.971466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 
00:38:19.210 [2024-12-13 10:40:12.971631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.971646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.971870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.971886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.971954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.971967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.972049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.972063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.972209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.972223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.972402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.972418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.972486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.972500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.972655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.972668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.972761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.972776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.972923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.972940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 
00:38:19.210 [2024-12-13 10:40:12.973109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.973125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.973207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.973221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.973310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.973324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.973393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.973407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.973573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.973587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.973690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.973703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.973793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.973807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.973889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.973903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.974065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.974080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.974156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.974170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 
00:38:19.210 [2024-12-13 10:40:12.974240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.974255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.974409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.974422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.974525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.974541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.974616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.974630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.974721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.974735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.974819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.974834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.974907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.974927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.975083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.975098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.975261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.975276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.975358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.975372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 
00:38:19.210 [2024-12-13 10:40:12.975443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.975463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.975605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.975618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.975797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.975812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.975950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.975966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.976054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.976069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.976155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.976169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.210 qpair failed and we were unable to recover it. 00:38:19.210 [2024-12-13 10:40:12.976313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.210 [2024-12-13 10:40:12.976329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.211 qpair failed and we were unable to recover it. 00:38:19.211 [2024-12-13 10:40:12.976491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.211 [2024-12-13 10:40:12.976506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.211 qpair failed and we were unable to recover it. 00:38:19.211 [2024-12-13 10:40:12.976647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.211 [2024-12-13 10:40:12.976663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.211 qpair failed and we were unable to recover it. 00:38:19.211 [2024-12-13 10:40:12.976746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.211 [2024-12-13 10:40:12.976759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.211 qpair failed and we were unable to recover it. 
00:38:19.211 [2024-12-13 10:40:12.976846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.211 [2024-12-13 10:40:12.976859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.211 qpair failed and we were unable to recover it. 00:38:19.211 [2024-12-13 10:40:12.976995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.211 [2024-12-13 10:40:12.977008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.211 qpair failed and we were unable to recover it. 00:38:19.211 [2024-12-13 10:40:12.977164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.211 [2024-12-13 10:40:12.977180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.211 qpair failed and we were unable to recover it. 00:38:19.211 [2024-12-13 10:40:12.977327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.211 [2024-12-13 10:40:12.977342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.211 qpair failed and we were unable to recover it. 00:38:19.211 [2024-12-13 10:40:12.977415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.211 [2024-12-13 10:40:12.977429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.211 qpair failed and we were unable to recover it. 00:38:19.211 [2024-12-13 10:40:12.977570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.211 [2024-12-13 10:40:12.977586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.211 qpair failed and we were unable to recover it. 00:38:19.211 [2024-12-13 10:40:12.977695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.211 [2024-12-13 10:40:12.977710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.211 qpair failed and we were unable to recover it. 00:38:19.211 [2024-12-13 10:40:12.977780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.211 [2024-12-13 10:40:12.977795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.211 qpair failed and we were unable to recover it. 00:38:19.211 [2024-12-13 10:40:12.977933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.211 [2024-12-13 10:40:12.977948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.211 qpair failed and we were unable to recover it. 00:38:19.211 [2024-12-13 10:40:12.978086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.211 [2024-12-13 10:40:12.978101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.211 qpair failed and we were unable to recover it. 
00:38:19.211 [2024-12-13 10:40:12.978181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.211 [2024-12-13 10:40:12.978194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:19.211 qpair failed and we were unable to recover it.
[... the same three-message sequence (connect() to 10.0.0.2 port 4420 failing with errno = 111, i.e. connection refused, in posix_sock_create; the resulting sock connection error for tqpair=0x61500033fe80 in nvme_tcp_qpair_connect_sock; and "qpair failed and we were unable to recover it.") repeats for every subsequent connection attempt in this burst, with timestamps advancing from 10:40:12.978 to 10:40:13.006 ...]
00:38:19.217 [2024-12-13 10:40:13.006733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.006752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.006844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.006860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.007069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.007089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.007274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.007289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.007477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.007493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.007561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.007575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.007785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.007800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.007877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.007891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.008095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.008112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.008252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.008268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 
00:38:19.217 [2024-12-13 10:40:13.008350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.008366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.008446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.008469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.008551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.008567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.008636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.008651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.008730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.008745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.008827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.008841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.008939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.008954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.009021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.009035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.009166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.009181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.009352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.009367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 
00:38:19.217 [2024-12-13 10:40:13.009438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.009459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.009601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.009616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.009768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.009787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.009862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.009877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.009944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.009958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.010090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.010105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.010179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.010194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.010345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.010361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.010514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.010531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.010634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.010665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 
00:38:19.217 [2024-12-13 10:40:13.010864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.010924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.011228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.011278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.011509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.011556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.011714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.011730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.011816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.011831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.217 [2024-12-13 10:40:13.011974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.217 [2024-12-13 10:40:13.011989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.217 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.012059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.012074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.012212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.012227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.012380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.012395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.012537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.012552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 
00:38:19.218 [2024-12-13 10:40:13.012780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.012795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.012999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.013014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.013151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.013169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.013262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.013277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.013348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.013364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.013511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.013527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.013697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.013741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.013889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.013932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.014073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.014117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.014264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.014337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 
00:38:19.218 [2024-12-13 10:40:13.014581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.014629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.014906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.014959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.015126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.015157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.015321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.015345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.015585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.015608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.015704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.015722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.015831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.015846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.016075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.016089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.016179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.016195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.016354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.016369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 
00:38:19.218 [2024-12-13 10:40:13.016510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.016525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.016678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.016693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.016825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.016840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.016981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.016996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.017096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.017111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.017201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.017216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.017366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.218 [2024-12-13 10:40:13.017381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.218 qpair failed and we were unable to recover it. 00:38:19.218 [2024-12-13 10:40:13.017473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.017489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.017640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.017656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.017853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.017878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 
00:38:19.219 [2024-12-13 10:40:13.018124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.018172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.018335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.018386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.018607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.018653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.018788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.018833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.019134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.019158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.019369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.019386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.019540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.019556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.019646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.019661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.019742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.019757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.019906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.019921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 
00:38:19.219 [2024-12-13 10:40:13.020084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.020100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.020212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.020256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.020415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.020480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.020661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.020708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.020912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.020935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.021053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.021097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.021302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.021352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.021525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.021573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.021771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.021815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.022016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.022059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 
00:38:19.219 [2024-12-13 10:40:13.022199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.022242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.022446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.022505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.022718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.022763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.022980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.023025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.023311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.219 [2024-12-13 10:40:13.023356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.219 qpair failed and we were unable to recover it. 00:38:19.219 [2024-12-13 10:40:13.023560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.023607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.023763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.023808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.024021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.024045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.024201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.024224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.024401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.024425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 
00:38:19.220 [2024-12-13 10:40:13.024602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.024620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.024795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.024810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.024916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.024931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.025105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.025120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.025202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.025216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.025311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.025325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.025411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.025426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.025517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.025533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.025631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.025646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.025808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.025833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 
00:38:19.220 [2024-12-13 10:40:13.026055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.026078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.026230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.026254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.026404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.026421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.026518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.026534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.026634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.026649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.026723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.026737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.026880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.026895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.027038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.027054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.027190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.027205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.027273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.027287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 
00:38:19.220 [2024-12-13 10:40:13.027368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.027381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.027471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.027491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.027568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.027589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.027737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.027753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.027931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.027948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.028019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.028034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.028121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.028134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.028281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.028296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.028434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.028459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.028561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.028576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 
00:38:19.220 [2024-12-13 10:40:13.028723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.028740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.028808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.028824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.028970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.028986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.029082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.220 [2024-12-13 10:40:13.029097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.220 qpair failed and we were unable to recover it. 00:38:19.220 [2024-12-13 10:40:13.029236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.029251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.029320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.029335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.029485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.029501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.029577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.029592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.029734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.029749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.029881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.029896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 
00:38:19.221 [2024-12-13 10:40:13.030097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.030112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.030317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.030360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.030531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.030576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.030790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.030835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.031036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.031052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.031134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.031148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.031283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.031297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.031433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.031489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.031737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.031787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.031941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.031991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 
00:38:19.221 [2024-12-13 10:40:13.032241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.032268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.032440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.032470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.032697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.032720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.032873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.032895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.033078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.033095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.033314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.033358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.033566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.033611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.033900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.033943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.034091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.034135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.034327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.034370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 
00:38:19.221 [2024-12-13 10:40:13.034509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.034557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.034769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.034813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.035076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.035093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.035166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.035180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.035333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.035348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.035512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.035557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.035693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.035737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.035927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.035943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.036053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.036067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.036133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.036149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 
00:38:19.221 [2024-12-13 10:40:13.036312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.036331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.036486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.036503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.036639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.221 [2024-12-13 10:40:13.036654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.221 qpair failed and we were unable to recover it. 00:38:19.221 [2024-12-13 10:40:13.036790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.222 [2024-12-13 10:40:13.036805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.222 qpair failed and we were unable to recover it. 00:38:19.222 [2024-12-13 10:40:13.036889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.222 [2024-12-13 10:40:13.036903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.222 qpair failed and we were unable to recover it. 00:38:19.222 [2024-12-13 10:40:13.037043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.222 [2024-12-13 10:40:13.037058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.222 qpair failed and we were unable to recover it. 00:38:19.222 [2024-12-13 10:40:13.037262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.222 [2024-12-13 10:40:13.037278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.222 qpair failed and we were unable to recover it. 00:38:19.222 [2024-12-13 10:40:13.037417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.222 [2024-12-13 10:40:13.037433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.222 qpair failed and we were unable to recover it. 00:38:19.222 [2024-12-13 10:40:13.037657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.222 [2024-12-13 10:40:13.037703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.222 qpair failed and we were unable to recover it. 00:38:19.222 [2024-12-13 10:40:13.037906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.222 [2024-12-13 10:40:13.037950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.222 qpair failed and we were unable to recover it. 
00:38:19.222 [2024-12-13 10:40:13.038087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.222 [2024-12-13 10:40:13.038132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.222 qpair failed and we were unable to recover it. 00:38:19.222 [2024-12-13 10:40:13.038266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.222 [2024-12-13 10:40:13.038311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.222 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.038523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.038568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.038769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.038813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.039069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.039085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.039221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.039237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.039381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.039397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.039480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.039495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.039560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.039574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.039725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.039750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 
00:38:19.506 [2024-12-13 10:40:13.039862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.039887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.039984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.040009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.040173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.040189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.040257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.040271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.040470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.040488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.040583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.040602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.040753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.040782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.041012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.041029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.041185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.041200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.041299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.041314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 
00:38:19.506 [2024-12-13 10:40:13.041454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.041469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.041567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.041582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.041714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.041733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.041805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.041819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.041980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.041995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.042077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.042092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.042179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.042194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.042346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.042361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.042452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.042468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.042675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.042690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 
00:38:19.506 [2024-12-13 10:40:13.042786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.506 [2024-12-13 10:40:13.042801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.506 qpair failed and we were unable to recover it. 00:38:19.506 [2024-12-13 10:40:13.042949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.042964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.043041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.043056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.043206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.043221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.043301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.043315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.043389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.043403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.043485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.043501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.043736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.043761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.043861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.043887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.044117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.044142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 
00:38:19.507 [2024-12-13 10:40:13.044302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.044319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.044390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.044404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.044539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.044555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.044687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.044702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.044773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.044786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.045009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.045053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.045258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.045302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.045494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.045539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.045749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.045765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.045932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.045979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 
00:38:19.507 [2024-12-13 10:40:13.046223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.046269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.046486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.046532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.046735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.046758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.046923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.046968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.047134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.047180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.047306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.047350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.047507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.047553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.047778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.047823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.047980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.048004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.048181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.048220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 
00:38:19.507 [2024-12-13 10:40:13.048431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.048497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.048707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.048751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.048978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.049001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.049202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.049247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.049537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.049594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.049792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.049817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.050014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.050038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.050251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.050270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.050420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.050436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 00:38:19.507 [2024-12-13 10:40:13.050555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.507 [2024-12-13 10:40:13.050598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.507 qpair failed and we were unable to recover it. 
00:38:19.508 [2024-12-13 10:40:13.050810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.050854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.051036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.051080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.051295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.051311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.051458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.051474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.051621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.051636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.051714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.051728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.051876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.051892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.052052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.052095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.052361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.052405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.052563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.052608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 
00:38:19.508 [2024-12-13 10:40:13.052753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.052796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.052987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.053030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.053224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.053268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.053423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.053480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.053672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.053687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.053753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.053767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.053841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.053855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.053948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.053963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.054108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.054152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.054281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.054333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 
00:38:19.508 [2024-12-13 10:40:13.054686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.054774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.055004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.055055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.055201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.055246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.055443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.055496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.055652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.055676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.055833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.055892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.056204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.056249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.056467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.056514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.056731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.056777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.056981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.057036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 
00:38:19.508 [2024-12-13 10:40:13.057131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.057153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.057350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.057394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.057578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.057625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.057769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.057821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.058108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.058131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.058296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.058319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.058428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.058486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.058704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.058749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.508 [2024-12-13 10:40:13.058911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.508 [2024-12-13 10:40:13.058956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.508 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.059161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.059185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 
00:38:19.509 [2024-12-13 10:40:13.059474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.059523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.059754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.059799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.060028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.060050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.060149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.060180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.060442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.060468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.060573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.060588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.060678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.060693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.060923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.060938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.061090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.061106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.061317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.061333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 
00:38:19.509 [2024-12-13 10:40:13.061551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.061597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.061885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.061901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.061988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.062002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.062147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.062191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.062327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.062382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.062655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.062700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.062918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.062932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.063005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.063018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.063252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.063267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.063478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.063497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 
00:38:19.509 [2024-12-13 10:40:13.063673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.063689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.063833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.063849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.064019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.064035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.064132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.064146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.064233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.064247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.064395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.064411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.064570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.064615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.064742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.064786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.065038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.065096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.065284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.065314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 
00:38:19.509 [2024-12-13 10:40:13.065456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.065504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.065665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.065706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.065878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.065894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.066146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.066161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.066304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.066318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.066525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.066540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.066638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.066652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.066773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.509 [2024-12-13 10:40:13.066789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.509 qpair failed and we were unable to recover it. 00:38:19.509 [2024-12-13 10:40:13.066948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.066963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.067040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.067053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 
00:38:19.510 [2024-12-13 10:40:13.067132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.067145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.067331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.067346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.067481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.067497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.067584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.067598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.067754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.067769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.067836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.067850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.068054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.068070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.068147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.068162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.068391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.068406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.068488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.068502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 
00:38:19.510 [2024-12-13 10:40:13.068581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.068594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.068664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.068678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.068892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.068937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.069138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.069186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.069488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.069537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.069802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.069857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.070123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.070147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.070252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.070276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.070377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.070400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.070524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.070543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 
00:38:19.510 [2024-12-13 10:40:13.070691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.070707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.070909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.070924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.070997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.071011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.071161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.071175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.071384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.071403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.071608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.071624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.071853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.071869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.072022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.072037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.072111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.072145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.072351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.072395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 
00:38:19.510 [2024-12-13 10:40:13.072621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.510 [2024-12-13 10:40:13.072679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.510 qpair failed and we were unable to recover it. 00:38:19.510 [2024-12-13 10:40:13.072861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.072875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.073041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.073083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.073305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.073350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.073559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.073603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.073837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.073880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.074080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.074096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.074249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.074293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.074433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.074487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.074696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.074739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 
00:38:19.511 [2024-12-13 10:40:13.074942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.074957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.075104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.075147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.075280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.075323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.075524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.075570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.075772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.075815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.076102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.076147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.076372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.076420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.076677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.076734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.076944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.076988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.077142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.077183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 
00:38:19.511 [2024-12-13 10:40:13.077410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.077465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.077711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.077754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.077940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.077983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.078099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.078115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.078200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.078214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.078363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.078378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.078537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.078553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.078628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.078642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.078786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.078801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.078938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.078955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 
00:38:19.511 [2024-12-13 10:40:13.079121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.079163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.079300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.079343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.079545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.079589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.079814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.079828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.079971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.080012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.080262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.080305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.080590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.080633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.080769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.080785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.080920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.080934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 00:38:19.511 [2024-12-13 10:40:13.081039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.511 [2024-12-13 10:40:13.081052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.511 qpair failed and we were unable to recover it. 
00:38:19.511 [2024-12-13 10:40:13.081188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.081245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.081394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.081439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.081760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.081808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.082015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.082031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.082165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.082179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.082337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.082379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.082546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.082590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.082794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.082838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.082992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.083006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.083214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.083257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 
00:38:19.512 [2024-12-13 10:40:13.083405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.083460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.083618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.083660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.083943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.083985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.084131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.084175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.084367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.084409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.084640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.084690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.084860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.084903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.085170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.085192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.085367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.085390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.085563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.085590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 
00:38:19.512 [2024-12-13 10:40:13.085778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.085822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.085961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.086004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.086153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.086196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.086456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.086502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.086645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.086701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.086924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.086947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.087053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.087076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.087274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.087322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.087493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.087547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.087755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.087809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 
00:38:19.512 [2024-12-13 10:40:13.087971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.088016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.088209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.088253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.088579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.088623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.088823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.088869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.089030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.089075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.089338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.089383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.089527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.089573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.089722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.089766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.090005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.512 [2024-12-13 10:40:13.090051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.512 qpair failed and we were unable to recover it. 00:38:19.512 [2024-12-13 10:40:13.090275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.090297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 
00:38:19.513 [2024-12-13 10:40:13.090400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.090423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.090603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.090626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.090816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.090839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.091009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.091034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.091212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.091235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.091403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.091461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.091679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.091724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.091931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.091993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.092260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.092283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.092366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.092388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 
00:38:19.513 [2024-12-13 10:40:13.092540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.092559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.092705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.092721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.092890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.092906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.093061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.093105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.093246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.093290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.093487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.093532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.093676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.093721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.093914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.093929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.094140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.094184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.094382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.094425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 
00:38:19.513 [2024-12-13 10:40:13.094588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.094633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.094817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.094832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.095007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.095050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.095247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.095291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.095490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.095534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.095809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.095853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.096070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.096114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.096219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.096232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.096347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.096362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.096530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.096554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 
00:38:19.513 [2024-12-13 10:40:13.096717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.096733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.096811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.096826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.097034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.097077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.097226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.097269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.097475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.097524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.097677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.097722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.097869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.097883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.097974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.513 [2024-12-13 10:40:13.098022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.513 qpair failed and we were unable to recover it. 00:38:19.513 [2024-12-13 10:40:13.098233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.098276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.098490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.098534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 
00:38:19.514 [2024-12-13 10:40:13.098747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.098791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.099009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.099024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.099185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.099227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.099437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.099504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.099789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.099831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.100015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.100030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.100176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.100220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.100445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.100500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.100694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.100736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.100887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.100902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 
00:38:19.514 [2024-12-13 10:40:13.101077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.101119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.101401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.101445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.101681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.101725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.101862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.101904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.102103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.102146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.102444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.102505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.102716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.102759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.102952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.102994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.103280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.103323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.103511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.103556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 
00:38:19.514 [2024-12-13 10:40:13.103700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.103743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.103998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.104013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.104221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.104264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.104393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.104436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.104642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.104689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.104859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.104874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.105039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.105082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.105274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.105317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.105473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.105518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.105718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.105766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 
00:38:19.514 [2024-12-13 10:40:13.105967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.106010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.106139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.106155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.106388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.106431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.106680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.106724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.106943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.106986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.107216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.107260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.107402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.107467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.514 qpair failed and we were unable to recover it. 00:38:19.514 [2024-12-13 10:40:13.107745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.514 [2024-12-13 10:40:13.107790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.107943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.107986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.108125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.108140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 
00:38:19.515 [2024-12-13 10:40:13.108285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.108334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.108555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.108598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.108743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.108787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.108996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.109011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.109171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.109213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.109377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.109421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.109719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.109763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.109954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.109969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.110090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.110133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.110357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.110400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 
00:38:19.515 [2024-12-13 10:40:13.110562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.110606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.110827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.110870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.111015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.111057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.111265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.111308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.111596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.111642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.111849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.111893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.112209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.112295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.112526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.112580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.112847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.112891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.113164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.113206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 
00:38:19.515 [2024-12-13 10:40:13.113493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.113542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.113834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.113892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.114065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.114108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.114394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.114417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.114532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.114554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.114664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.114687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.114802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.114824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.115073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.115095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.115267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.115289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.115403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.115429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 
00:38:19.515 [2024-12-13 10:40:13.115568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.115615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.115843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.515 [2024-12-13 10:40:13.115889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.515 qpair failed and we were unable to recover it. 00:38:19.515 [2024-12-13 10:40:13.116011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.116030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.116128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.116141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.116217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.116232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.116385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.116405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.116496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.116512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.116685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.116729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.116863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.116908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.117067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.117111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 
00:38:19.516 [2024-12-13 10:40:13.117241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.117256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.117392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.117408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.117581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.117597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.117675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.117688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.117760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.117774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.117915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.117929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.118022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.118064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.118274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.118318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.118476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.118521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.118730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.118773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 
00:38:19.516 [2024-12-13 10:40:13.118935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.118950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.119029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.119043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.119230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.119245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.119333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.119347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.119496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.119511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.119662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.119677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.119847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.119877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.119997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.120043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.120180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.120223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.120371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.120415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 
00:38:19.516 [2024-12-13 10:40:13.120727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.120772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.120934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.120979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.121133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.121156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.121383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.121426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.121595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.121639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.121853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.121897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.122004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.122027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.122219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.122265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.122499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.122546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.122683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.122734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 
00:38:19.516 [2024-12-13 10:40:13.122949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.122974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.516 qpair failed and we were unable to recover it. 00:38:19.516 [2024-12-13 10:40:13.123087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.516 [2024-12-13 10:40:13.123131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.123303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.123348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.123506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.123551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.123765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.123808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.124019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.124063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.124269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.124292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.124465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.124510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.124725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.124769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.124919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.124963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 
00:38:19.517 [2024-12-13 10:40:13.125093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.125116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.125215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.125237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.125349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.125373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.125531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.125555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.125675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.125698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.125780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.125822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.125966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.126010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.126135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.126178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.126386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.126431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.126641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.126683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 
00:38:19.517 [2024-12-13 10:40:13.126816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.126839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.127059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.127081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.127165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.127187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.127366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.127389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.127491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.127512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.127605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.127626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.127728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.127746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.127817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.127831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.127921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.127936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.128014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.128028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 
00:38:19.517 [2024-12-13 10:40:13.128113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.128127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.128207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.128220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.128335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.128377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.128515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.128559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.128772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.128817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.129008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.129050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.129163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.129179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.129275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.129289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.129500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.129516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.517 [2024-12-13 10:40:13.129658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.129676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 
00:38:19.517 [2024-12-13 10:40:13.129867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.517 [2024-12-13 10:40:13.129909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.517 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.130119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.130162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.130361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.130405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.130565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.130611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.130886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.130928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.131170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.131220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.131481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.131497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.131600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.131616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.131711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.131725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.131870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.131885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 
00:38:19.518 [2024-12-13 10:40:13.132117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.132160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.132304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.132349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.132491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.132536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.132695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.132738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.132869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.132912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.133055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.133070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.133259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.133304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.133465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.133510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.133662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.133705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.133992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.134035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 
00:38:19.518 [2024-12-13 10:40:13.134254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.134298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.134564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.134610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.134763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.134804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.135030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.135045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.135205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.135220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.135358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.135374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.135543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.135570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.135672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.135693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.135802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.135824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.135984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.136005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 
00:38:19.518 [2024-12-13 10:40:13.136095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.136153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.136305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.136349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.136633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.136679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.136892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.136992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.137268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.137283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.137438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.137459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.137683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.137726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.137859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.137902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.138108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.138160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 00:38:19.518 [2024-12-13 10:40:13.138408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.518 [2024-12-13 10:40:13.138473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.518 qpair failed and we were unable to recover it. 
00:38:19.519 [2024-12-13 10:40:13.138619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.138662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.138868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.138910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.139127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.139142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.139261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.139304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.139494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.139539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.139803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.139846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.140028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.140043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.140202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.140217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.140378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.140393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.140491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.140506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 
00:38:19.519 [2024-12-13 10:40:13.140604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.140618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.140824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.140840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.140923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.140937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.141032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.141048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.141186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.141227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.141426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.141493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.141634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.141676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.141883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.141926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.142147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.142190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.142357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.142401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 
00:38:19.519 [2024-12-13 10:40:13.142617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.142662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.142788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.142830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.142969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.143013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.143152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.143199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.143287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.143300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.143460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.143476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.143625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.143640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.143719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.143733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.143829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.143872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.144137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.144181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 
00:38:19.519 [2024-12-13 10:40:13.144330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.144373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.144584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.144628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.144779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.144795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.144888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.144901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.519 [2024-12-13 10:40:13.145071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.519 [2024-12-13 10:40:13.145115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.519 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.145371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.145416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.145644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.145688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.145960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.146004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.146146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.146160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.146246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.146262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 
00:38:19.520 [2024-12-13 10:40:13.146350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.146365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.146458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.146472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.146611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.146625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.146761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.146776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.146875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.146889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.146972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.146985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.147073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.147086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.147155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.147169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.147246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.147260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.147332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.147346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 
00:38:19.520 [2024-12-13 10:40:13.147422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.147436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.147481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325d00 (9): Bad file descriptor 00:38:19.520 [2024-12-13 10:40:13.147776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.147823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.148013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.148051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.148252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.148298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.148409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.148424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.148596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.148612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.148750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.148766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.148925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.148940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.149038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.149052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 
00:38:19.520 [2024-12-13 10:40:13.149153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.149168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.149263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.149282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.149378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.149392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.149490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.149505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.149603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.149617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.149762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.149777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.149956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.149971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.150121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.150137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.150211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.150226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.150373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.150389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 
00:38:19.520 [2024-12-13 10:40:13.150534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.150551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.150650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.150664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.150798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.150813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.150974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.520 [2024-12-13 10:40:13.150989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.520 qpair failed and we were unable to recover it. 00:38:19.520 [2024-12-13 10:40:13.151140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.151155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.151242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.151256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.151403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.151418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.151576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.151592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.151673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.151687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.151776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.151790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 
00:38:19.521 [2024-12-13 10:40:13.151935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.151950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.152029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.152043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.152197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.152212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.152382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.152398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.152478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.152493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.152560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.152575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.152659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.152675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.152760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.152774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.152912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.152926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.153004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.153018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 
00:38:19.521 [2024-12-13 10:40:13.153088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.153102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.153191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.153204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.153288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.153302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.153380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.153396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.153491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.153506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.153577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.153591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.153658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.153672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.153745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.153759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.153929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.153943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.154010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.154024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 
00:38:19.521 [2024-12-13 10:40:13.154094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.154108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.154197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.154211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.154309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.154323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.154405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.154421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.154511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.154526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.154596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.154610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.154693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.154706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.154796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.154810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.154971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.154984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.155067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.155081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 
00:38:19.521 [2024-12-13 10:40:13.155157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.155171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.155239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.155253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.155322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.155336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.521 qpair failed and we were unable to recover it. 00:38:19.521 [2024-12-13 10:40:13.155413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.521 [2024-12-13 10:40:13.155427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.155510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.155525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.155595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.155609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.155774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.155788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.155890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.155934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.156072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.156114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.156251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.156295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 
00:38:19.522 [2024-12-13 10:40:13.156488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.156546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.156718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.156774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.157029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.157088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.157252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.157299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.157474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.157517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.157675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.157718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.157913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.157956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.158264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.158309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.158465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.158522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.158654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.158698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 
00:38:19.522 [2024-12-13 10:40:13.158914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.158956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.159091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.159135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.159358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.159402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.159574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.159625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.159825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.159867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.160026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.160069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.160273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.160317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.160464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.160508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.160771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.160815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.160959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.160974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 
00:38:19.522 [2024-12-13 10:40:13.161110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.161125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.161260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.161303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.161516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.161562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.161850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.161893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.162021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.162036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.162172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.162187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.162341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.162356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.162466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.162509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.162773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.162821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.162961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.163005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 
00:38:19.522 [2024-12-13 10:40:13.163772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.163799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.163903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.163917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.522 [2024-12-13 10:40:13.164078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.522 [2024-12-13 10:40:13.164095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.522 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.164187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.164203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.164343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.164358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.164496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.164512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.164583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.164596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.164684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.164698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.164792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.164807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.164902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.164944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 
00:38:19.523 [2024-12-13 10:40:13.165154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.165243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.165432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.165491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.165707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.165766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.165982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.165999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.166190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.166233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.166436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.166492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.166628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.166670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.166885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.166928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.167208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.167259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.167444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.167470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 
00:38:19.523 [2024-12-13 10:40:13.167626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.167641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.167750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.167793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.167944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.167989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.168134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.168182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.168384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.168399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.168487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.168502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.168650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.168665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.168744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.168758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.168956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.168971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.169068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.169082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 
00:38:19.523 [2024-12-13 10:40:13.169181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.169195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.169309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.169350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.169506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.169554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.169682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.169726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.170019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.170062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.170317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.170361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.170503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.170546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.523 [2024-12-13 10:40:13.170759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.523 [2024-12-13 10:40:13.170803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.523 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.170935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.170977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.171149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.171193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 
00:38:19.524 [2024-12-13 10:40:13.171437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.171490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.171690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.171734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.171931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.171974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.172116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.172131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.172289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.172305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.172506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.172526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.172615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.172629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.172709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.172722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.172814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.172855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.173043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.173084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 
00:38:19.524 [2024-12-13 10:40:13.173269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.173342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.173565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.173612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.173762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.173809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.173973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.173990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.174066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.174079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.174161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.174175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.174321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.174335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.174416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.174457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.174604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.174648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.174837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.174881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 
00:38:19.524 [2024-12-13 10:40:13.175030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.175074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.175265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.175279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.175398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.175442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.175729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.175779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.176013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.176042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.177444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.177492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.177730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.177754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.177998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.178044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.178184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.178228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.178514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.178560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 
00:38:19.524 [2024-12-13 10:40:13.178797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.178842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.179075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.179098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.179195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.179218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.179410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.179468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.179686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.179729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.179861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.179905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.524 [2024-12-13 10:40:13.180094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.524 [2024-12-13 10:40:13.180108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.524 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.180315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.180360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.180584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.180629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.180768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.180811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 
00:38:19.525 [2024-12-13 10:40:13.180997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.181013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.181109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.181123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.181270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.181284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.181487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.181502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.181597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.181612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.181759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.181801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.181947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.181989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.182183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.182233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.182371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.182386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.182476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.182490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 
00:38:19.525 [2024-12-13 10:40:13.182645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.182661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.182815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.182858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.182993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.183036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.183238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.183325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.183438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.183468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.183646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.183669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.183759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.183780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.183946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.183990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.184143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.184186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.184352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.184395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 
00:38:19.525 [2024-12-13 10:40:13.184567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.184613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.184832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.184876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.185101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.185145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.185287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.185346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.185484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.185530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.185724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.185767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.186092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.186144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.186308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.186331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.186423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.186487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.186630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.186673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 
00:38:19.525 [2024-12-13 10:40:13.186846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.186889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.187039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.187082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.187266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.187311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.187444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.187499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.188489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.188535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.525 [2024-12-13 10:40:13.188658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.525 [2024-12-13 10:40:13.188685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.525 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.188795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.188821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.188996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.189024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.189113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.189134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.189394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.189418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 
00:38:19.526 [2024-12-13 10:40:13.189542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.189565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.189660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.189682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.189866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.189889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.190050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.190074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.190264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.190288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.190394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.190416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.190645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.190669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.190764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.190786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.190946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.190969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.191092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.191114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 
00:38:19.526 [2024-12-13 10:40:13.191222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.191255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.191467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.191522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.191723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.191752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.191829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.191843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.191983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.191999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.192090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.192104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.192182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.192195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.192347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.192363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.192528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.192544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.192623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.192636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 
00:38:19.526 [2024-12-13 10:40:13.192719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.192733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.192836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.192851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.193062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.193081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.193240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.193257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.193330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.193343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.193475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.193489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.193717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.193731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.193911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.193925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.194013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.194026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.194163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.194176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 
00:38:19.526 [2024-12-13 10:40:13.194336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.194349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.194432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.194446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.194594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.194608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.194695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.194707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.526 [2024-12-13 10:40:13.194796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.526 [2024-12-13 10:40:13.194809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.526 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.194967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.194981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.195070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.195083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.195253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.195268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.195351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.195364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.195466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.195480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 
00:38:19.527 [2024-12-13 10:40:13.195688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.195706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.195788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.195801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.195879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.195892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.195957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.195969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.196045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.196058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.196209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.196222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.196311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.196324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.196405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.196417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.196507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.196520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.196594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.196606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 
00:38:19.527 [2024-12-13 10:40:13.196791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.196816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.197030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.197059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.197174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.197208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.197297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.197312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.197383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.197396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.197487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.197500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.197598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.197611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.197712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.197725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.197870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.197883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.198036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.198050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 
00:38:19.527 [2024-12-13 10:40:13.198141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.198153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.198227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.198240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.198318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.198331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.198476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.198494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.198560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.198573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.198780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.198793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.198875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.198888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.198957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.198969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.199069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.199081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.199153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.199166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 
00:38:19.527 [2024-12-13 10:40:13.199264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.199276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.199343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.199356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.199457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.199470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.527 [2024-12-13 10:40:13.199540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.527 [2024-12-13 10:40:13.199553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.527 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.199695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.199707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.199782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.199795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.199877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.199889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.200046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.200060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.200141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.200153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.200290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.200303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 
00:38:19.528 [2024-12-13 10:40:13.200380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.200393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.200465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.200478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.200569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.200582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.200676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.200689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.200772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.200785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.200935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.200949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.201030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.201042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.201135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.201148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.201285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.201299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.201527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.201572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 
00:38:19.528 [2024-12-13 10:40:13.201704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.201753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.201893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.201936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.202063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.202105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.202281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.202295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.202515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.202560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.202772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.202815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.202949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.202992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.203163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.203176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.203390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.203432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.203691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.203734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 
00:38:19.528 [2024-12-13 10:40:13.203938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.203980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.204141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.204155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.204289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.204303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.204439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.204460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.204539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.204552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.204703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.204717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.204804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.204817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.204952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.204970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.528 [2024-12-13 10:40:13.205086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.528 [2024-12-13 10:40:13.205099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.528 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.205246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.205259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 
00:38:19.529 [2024-12-13 10:40:13.205353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.205366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.205469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.205482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.205561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.205574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.205643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.205656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.205818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.205832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.205917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.205929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.206027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.206040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.206182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.206195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.206272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.206285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.206371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.206383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 
00:38:19.529 [2024-12-13 10:40:13.206466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.206480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.206592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.206605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.206672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.206686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.206835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.206850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.206991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.207004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.207094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.207107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.207186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.207199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.207270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.207283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.207360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.207372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.207456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.207469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 
00:38:19.529 [2024-12-13 10:40:13.207539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.207554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.207688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.207700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.207799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.207812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.207895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.207907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.207968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.207980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.208055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.208068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.208207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.208220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.208307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.208320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.208384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.208396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.208560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.208573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 
00:38:19.529 [2024-12-13 10:40:13.208655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.208667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.208821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.208833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.208927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.208939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.209090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.209102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.209189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.209202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.209337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.209350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.209437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.209456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.529 [2024-12-13 10:40:13.209532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.529 [2024-12-13 10:40:13.209544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.529 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.209635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.209648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.209731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.209743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 
00:38:19.530 [2024-12-13 10:40:13.209826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.209838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.209979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.209991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.210068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.210081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.210246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.210260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.210344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.210356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.210437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.210456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.210600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.210614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.210696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.210709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.210843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.210857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.210926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.210939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 
00:38:19.530 [2024-12-13 10:40:13.211016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.211029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.211157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.211171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.211308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.211321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.211430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.211443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.211589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.211603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.211676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.211689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.211767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.211779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.211937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.211950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.212032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.212044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.212111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.212123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 
00:38:19.530 [2024-12-13 10:40:13.212202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.212218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.212295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.212312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.212400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.212413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.212604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.212618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.212767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.212780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.212848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.212861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.213064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.213078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.213159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.213171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.213253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.213265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.213347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.213359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 
00:38:19.530 [2024-12-13 10:40:13.213492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.213505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.213645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.213658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.213743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.213756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.213837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.213850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.213927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.213939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.214096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.214109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.214311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.530 [2024-12-13 10:40:13.214364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.530 qpair failed and we were unable to recover it. 00:38:19.530 [2024-12-13 10:40:13.214595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.214654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 00:38:19.531 [2024-12-13 10:40:13.214818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.214871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 00:38:19.531 [2024-12-13 10:40:13.215037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.215084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 
00:38:19.531 [2024-12-13 10:40:13.215287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.215330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 00:38:19.531 [2024-12-13 10:40:13.215520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.215543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 00:38:19.531 [2024-12-13 10:40:13.215811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.215832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 00:38:19.531 [2024-12-13 10:40:13.215907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.215921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 00:38:19.531 [2024-12-13 10:40:13.216002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.216014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 00:38:19.531 [2024-12-13 10:40:13.216079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.216091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 00:38:19.531 [2024-12-13 10:40:13.216184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.216197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 00:38:19.531 [2024-12-13 10:40:13.216401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.216415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 00:38:19.531 [2024-12-13 10:40:13.216573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.216587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 00:38:19.531 [2024-12-13 10:40:13.216688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.216705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 
00:38:19.531 [2024-12-13 10:40:13.216878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.216901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 00:38:19.531 [2024-12-13 10:40:13.217003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.217026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 00:38:19.531 [2024-12-13 10:40:13.217213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.217231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 00:38:19.531 [2024-12-13 10:40:13.217378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.217396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 00:38:19.531 [2024-12-13 10:40:13.217486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.217500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 00:38:19.531 [2024-12-13 10:40:13.217646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.217662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 00:38:19.531 [2024-12-13 10:40:13.217740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.217752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 00:38:19.531 [2024-12-13 10:40:13.217828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.217842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 00:38:19.531 [2024-12-13 10:40:13.217982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.217997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 00:38:19.531 [2024-12-13 10:40:13.218083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.218096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it. 
00:38:19.531 [2024-12-13 10:40:13.218165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.531 [2024-12-13 10:40:13.218181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.531 qpair failed and we were unable to recover it.
00:38:19.531-00:38:19.537 [2024-12-13 10:40:13.218263 - 10:40:13.242477] [... the same three-message sequence repeats continuously over this interval, differing only in timestamps: every connect() attempt to 10.0.0.2:4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x61500033fe80, and each time the qpair fails and cannot be recovered ...]
00:38:19.537 [2024-12-13 10:40:13.242687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.242729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.242870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.242912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.243050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.243092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.243202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.243215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.243299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.243312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.243444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.243464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.243599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.243612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.243746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.243759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.243842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.243855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.244003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.244016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 
00:38:19.537 [2024-12-13 10:40:13.244111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.244125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.244208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.244221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.244309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.244325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.244398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.244415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.244492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.244505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.244643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.244656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.244728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.244741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.244810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.244823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.244961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.244975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.245043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.245056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 
00:38:19.537 [2024-12-13 10:40:13.245195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.245209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.245307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.245320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.245459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.245473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.245633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.245646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.245715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.245727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.245870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.245884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.245980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.245994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.246172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.246186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.246273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.246286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.246355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.246368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 
00:38:19.537 [2024-12-13 10:40:13.246475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.537 [2024-12-13 10:40:13.246491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.537 qpair failed and we were unable to recover it. 00:38:19.537 [2024-12-13 10:40:13.246577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.246591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.246684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.246697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.246780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.246794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.246994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.247008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.247081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.247094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.247230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.247244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.247327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.247340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.247408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.247422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.247580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.247594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 
00:38:19.538 [2024-12-13 10:40:13.247694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.247708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.247780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.247793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.247940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.247953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.248032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.248045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.248190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.248204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.248343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.248357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.248424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.248436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.248530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.248558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.248684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.248716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.248861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.248897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 
00:38:19.538 [2024-12-13 10:40:13.248987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.249003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.249144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.249157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.249252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.249265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.249333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.249346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.249446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.249466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.249602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.249619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.249686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.249699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.249771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.249785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.249926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.249939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.250024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.250037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 
00:38:19.538 [2024-12-13 10:40:13.250109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.250122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.250257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.250271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.250352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.250365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.250453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.250467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.250544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.250558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.250633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.250646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.250713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.250726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.250800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.250813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.250899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.250912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 00:38:19.538 [2024-12-13 10:40:13.250984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.250998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.538 qpair failed and we were unable to recover it. 
00:38:19.538 [2024-12-13 10:40:13.251093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.538 [2024-12-13 10:40:13.251108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.251181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.251194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.251339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.251352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.251446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.251472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.251544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.251557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.251634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.251648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.251738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.251752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.251832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.251845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.251916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.251930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.252086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.252100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 
00:38:19.539 [2024-12-13 10:40:13.252191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.252210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.252360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.252375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.252454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.252468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.252605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.252617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.252693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.252707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.252771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.252785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.252862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.252874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.252947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.252960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.253043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.253056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.253201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.253214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 
00:38:19.539 [2024-12-13 10:40:13.253345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.253363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.253445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.253465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.253547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.253560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.253702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.253716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.253817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.253831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.253901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.253913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.253981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.253993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.254157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.254182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.254282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.254308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.254408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.254435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 
00:38:19.539 [2024-12-13 10:40:13.254541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.254557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.254640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.254653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.254799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.254812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.254883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.254896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.254970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.254982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.255056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.255068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.255154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.255167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.255337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.255351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.255440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.255460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.539 qpair failed and we were unable to recover it. 00:38:19.539 [2024-12-13 10:40:13.255597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.539 [2024-12-13 10:40:13.255611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 
00:38:19.540 [2024-12-13 10:40:13.255692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.255709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.255782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.255795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.255947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.255960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.256037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.256050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.256115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.256129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.256204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.256217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.256291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.256305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.256491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.256505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.256576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.256590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.256655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.256669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 
00:38:19.540 [2024-12-13 10:40:13.256739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.256752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.256822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.256834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.256904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.256916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.257071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.257085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.257160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.257174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.257240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.257253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.257332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.257344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.257420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.257432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.257509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.257521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.257612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.257626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 
00:38:19.540 [2024-12-13 10:40:13.257784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.257797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.257967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.257981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.258050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.258064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.258155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.258168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.258249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.258261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.258327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.258339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.258408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.258420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.258515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.258540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.258631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.258654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.258814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.258836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 
00:38:19.540 [2024-12-13 10:40:13.258958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.258981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.259083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.259111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.259210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.259232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.259323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.259340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.259492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.259506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.259578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.259590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.259673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.259686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.259835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.259849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.540 qpair failed and we were unable to recover it. 00:38:19.540 [2024-12-13 10:40:13.259918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.540 [2024-12-13 10:40:13.259931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.260008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.260021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 
00:38:19.541 [2024-12-13 10:40:13.260094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.260111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.260188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.260200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.260294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.260312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.260384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.260401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.260475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.260488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.260559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.260571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.260638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.260651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.260735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.260747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.260881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.260895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.260964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.260977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 
00:38:19.541 [2024-12-13 10:40:13.261068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.261082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.261168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.261181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.261250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.261263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.261329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.261341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.261436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.261456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.261611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.261627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.261706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.261719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.261802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.261815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.261905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.261919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.261986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.261999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 
00:38:19.541 [2024-12-13 10:40:13.262084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.262097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.262184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.262197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.262272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.262285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.262376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.262390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.262551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.262565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.262642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.262655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.262804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.262817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.262889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.262904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.263103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.263118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.263212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.263226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 
00:38:19.541 [2024-12-13 10:40:13.263363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.263376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.263457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.263470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.263610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.541 [2024-12-13 10:40:13.263624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.541 qpair failed and we were unable to recover it. 00:38:19.541 [2024-12-13 10:40:13.263762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.263777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.263922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.263935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.264067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.264082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.264161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.264173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.264238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.264251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.264346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.264359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.264442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.264463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 
00:38:19.542 [2024-12-13 10:40:13.264545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.264558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.264638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.264655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.264722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.264735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.264812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.264825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.264976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.264989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.265133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.265148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.265235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.265250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.265327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.265339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.265556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.265571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.265645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.265658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 
00:38:19.542 [2024-12-13 10:40:13.265793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.265806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.265889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.265902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.266113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.266126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.266205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.266217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.266291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.266303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.266378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.266391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.266472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.266486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.266567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.266580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.266655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.266667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.266833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.266874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 
00:38:19.542 [2024-12-13 10:40:13.267024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.267067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.267196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.267237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.267369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.267412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.267526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.267540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.267626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.267638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.267860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.267874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.268039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.268052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.268135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.268158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.268237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.268250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.268330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.268343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 
00:38:19.542 [2024-12-13 10:40:13.268413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.268426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.268510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.542 [2024-12-13 10:40:13.268523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.542 qpair failed and we were unable to recover it. 00:38:19.542 [2024-12-13 10:40:13.268678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.268692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.268771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.268783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.268852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.268865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.268940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.268953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.269025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.269038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.269111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.269123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.269274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.269287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.269425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.269438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 
00:38:19.543 [2024-12-13 10:40:13.269522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.269535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.269691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.269704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.269780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.269792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.269872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.269886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.269971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.269984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.270125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.270138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.270239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.270253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.270323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.270337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.270425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.270438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.270541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.270554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 
00:38:19.543 [2024-12-13 10:40:13.270626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.270639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.270709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.270721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.270791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.270805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.270886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.270898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.270975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.270988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.271054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.271067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.271208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.271222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.271357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.271370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.271460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.271473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.271548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.271561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 
00:38:19.543 [2024-12-13 10:40:13.271635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.271649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.271733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.271747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.271808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.271821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.271968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.271981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.272117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.272130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.272274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.272288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.272433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.272454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.272537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.272553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.272647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.272662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.272749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.272764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 
00:38:19.543 [2024-12-13 10:40:13.272839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.272853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.543 qpair failed and we were unable to recover it. 00:38:19.543 [2024-12-13 10:40:13.272997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.543 [2024-12-13 10:40:13.273012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.273094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.273108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.273193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.273206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.273293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.273306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.273376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.273389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.273539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.273553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.273617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.273630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.273711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.273725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.273925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.273938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 
00:38:19.544 [2024-12-13 10:40:13.274011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.274024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.274106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.274119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.274253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.274266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.274352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.274365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.274429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.274442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.274539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.274553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.274620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.274634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.274721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.274734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.274804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.274818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.274905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.274919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 
00:38:19.544 [2024-12-13 10:40:13.274997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.275011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.275079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.275115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.275209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.275223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.275306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.275319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.275395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.275410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.275483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.275497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.275650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.275665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.275757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.275770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.275853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.275867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.275941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.275953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 
00:38:19.544 [2024-12-13 10:40:13.276037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.276050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.276257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.276272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.276344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.276357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.276452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.276466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.276534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.276548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.276682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.276695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.276843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.276856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.276936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.276952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.277220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.277233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 00:38:19.544 [2024-12-13 10:40:13.277301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.277315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.544 qpair failed and we were unable to recover it. 
00:38:19.544 [2024-12-13 10:40:13.277398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.544 [2024-12-13 10:40:13.277412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.277488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.277502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.277591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.277604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.277674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.277688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.277758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.277771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.277992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.278005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.278090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.278104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.278240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.278253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.278324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.278337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.278496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.278510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 
00:38:19.545 [2024-12-13 10:40:13.278596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.278610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.278744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.278758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.278900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.278914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.278994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.279008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.279083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.279097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.279173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.279186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.279254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.279267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.279348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.279362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.279497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.279510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.279583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.279599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 
00:38:19.545 [2024-12-13 10:40:13.279679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.279692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.279789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.279802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.279872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.279885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.279959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.279973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.280058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.280072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.280219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.280232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.280378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.280392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.280476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.280489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.280573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.280587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.280654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.280667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 
00:38:19.545 [2024-12-13 10:40:13.280806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.280820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.280958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.280971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.281133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.281146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.281217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.281230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.281311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.281325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.281397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.281410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.281497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.281511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.281583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.281599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.281667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.281681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 00:38:19.545 [2024-12-13 10:40:13.281761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.545 [2024-12-13 10:40:13.281775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.545 qpair failed and we were unable to recover it. 
00:38:19.546 [2024-12-13 10:40:13.281931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.281944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.282080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.282094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.282184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.282197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.282265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.282278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.282348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.282362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.282513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.282532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.282738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.282751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.282954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.282967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.283184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.283197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.283343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.283356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 
00:38:19.546 [2024-12-13 10:40:13.283455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.283471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.283559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.283573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.283657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.283670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.283747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.283761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.283840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.283853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.283922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.283935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.284014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.284028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.284115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.284130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.284201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.284214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.284381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.284394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 
00:38:19.546 [2024-12-13 10:40:13.284532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.284547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.284629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.284641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.284715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.284727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.284801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.284813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.284893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.284905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.285020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.285032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.285184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.285197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.285284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.285297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.285379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.285393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.285531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.285544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 
00:38:19.546 [2024-12-13 10:40:13.285633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.285646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.285727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.285740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.285843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.285877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.286009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.286037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.546 qpair failed and we were unable to recover it. 00:38:19.546 [2024-12-13 10:40:13.286143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.546 [2024-12-13 10:40:13.286169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.286251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.286265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.286335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.286348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.286422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.286438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.286513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.286526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.286597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.286609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 
00:38:19.547 [2024-12-13 10:40:13.286753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.286766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.286916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.286928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.287132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.287145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.287282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.287295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.287472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.287487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.287558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.287571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.287699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.287713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.287923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.287937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.288012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.288025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.288173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.288187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 
00:38:19.547 [2024-12-13 10:40:13.288329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.288343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.288422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.288435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.288546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.288559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.288658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.288670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.288809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.288822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.288897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.288910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.289048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.289062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.289199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.289213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.289439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.289460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.289636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.289650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 
00:38:19.547 [2024-12-13 10:40:13.289735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.289748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.289900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.289913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.289999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.290011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.290103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.290116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.290258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.290273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.290342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.290355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.290486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.290503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.290583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.290596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.290759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.290773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.290932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.290946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 
00:38:19.547 [2024-12-13 10:40:13.291047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.291060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.291264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.291279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.291376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.547 [2024-12-13 10:40:13.291394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.547 qpair failed and we were unable to recover it. 00:38:19.547 [2024-12-13 10:40:13.291468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.291481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.291655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.291669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.291748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.291761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.291901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.291915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.291999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.292014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.292159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.292173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.292260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.292273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 
00:38:19.548 [2024-12-13 10:40:13.292435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.292463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.292542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.292555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.292625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.292638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.292720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.292733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.292809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.292821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.293024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.293038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.293111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.293124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.293268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.293281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.293434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.293455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.293619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.293633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 
00:38:19.548 [2024-12-13 10:40:13.293714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.293727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.293806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.293819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.294049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.294063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.294128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.294141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.294215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.294228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.294386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.294399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.294551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.294565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.294646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.294659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.294725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.294738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.294824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.294837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 
00:38:19.548 [2024-12-13 10:40:13.294971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.294984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.295056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.295068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.295269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.295283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.295353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.295365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.295460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.295474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.295562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.295575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.295663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.295676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.295938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.295953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.296221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.296277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.296424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.296478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 
00:38:19.548 [2024-12-13 10:40:13.296648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.296692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.296845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.296888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.297024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.297066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.297349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.297416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.297566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.297596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.548 qpair failed and we were unable to recover it. 00:38:19.548 [2024-12-13 10:40:13.297721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.548 [2024-12-13 10:40:13.297746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.297921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.297936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.298149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.298165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.298306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.298320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.298493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.298507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 
00:38:19.549 [2024-12-13 10:40:13.298654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.298667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.298750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.298763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.298855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.298867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.298941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.298955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.299103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.299117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.299252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.299265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.299412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.299426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.299572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.299585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.299718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.299731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.299811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.299824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 
00:38:19.549 [2024-12-13 10:40:13.299906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.299919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.300060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.300073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.300143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.300156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.300233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.300247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.300326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.300339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.300430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.300443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.300606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.300620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.300704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.300718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.300798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.300811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.300922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.300946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 
00:38:19.549 [2024-12-13 10:40:13.301027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.301040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.301103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.301116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.301181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.301193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.301282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.301294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.301440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.301460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.301602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.301616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.301698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.301711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.301791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.301804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.301872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.301885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.302035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.302049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 
00:38:19.549 [2024-12-13 10:40:13.302129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.302144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.302227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.302240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.302311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.302323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.302420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.302434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.302586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.302601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.302673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.302685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.302830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.302843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.549 [2024-12-13 10:40:13.302977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.549 [2024-12-13 10:40:13.302994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.549 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.303155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.303169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.303249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.303262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 
00:38:19.550 [2024-12-13 10:40:13.303395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.303410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.303501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.303515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.303596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.303607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.303676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.303689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.303764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.303776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.303922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.303935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.304016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.304029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.304096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.304109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.304269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.304282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.304428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.304441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 
00:38:19.550 [2024-12-13 10:40:13.304538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.304551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.304702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.304716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.304795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.304807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.304879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.304891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.304977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.304991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.305063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.305075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.305213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.305226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.305363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.305377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.305462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.305474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.305562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.305575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 
00:38:19.550 [2024-12-13 10:40:13.305716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.305731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.305806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.305819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.305898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.305911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.305976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.305988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.306077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.306103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.306317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.306343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.306471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.306497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.306597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.306612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.306818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.306832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 00:38:19.550 [2024-12-13 10:40:13.306981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.550 [2024-12-13 10:40:13.306994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.550 qpair failed and we were unable to recover it. 
00:38:19.551 [2024-12-13 10:40:13.307064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.307077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.307147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.307159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.307333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.307346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.307434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.307454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.307555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.307568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.307712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.307726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.307793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.307805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.307885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.307900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.307985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.307997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.308066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.308079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 
00:38:19.551 [2024-12-13 10:40:13.308211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.308225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.308298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.308311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.308381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.308394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.308476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.308489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.308582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.308595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.308690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.308706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.308774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.308791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.308865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.308878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.309035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.309049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.309215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.309257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 
00:38:19.551 [2024-12-13 10:40:13.309385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.309427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.309583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.309628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.309779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.309828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.310117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.310170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.310313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.310361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.310577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.310622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.310763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.310778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.310850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.310863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.310929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.310942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 00:38:19.551 [2024-12-13 10:40:13.311028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.551 [2024-12-13 10:40:13.311041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.551 qpair failed and we were unable to recover it. 
00:38:19.551 [2024-12-13 10:40:13.311212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.311225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.311312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.311325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.311390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.311402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.311486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.311499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.311670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.311692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.311782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.311805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.311967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.311992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.312079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.312094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.312164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.312176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.312256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.312268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 
00:38:19.552 [2024-12-13 10:40:13.312349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.312362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.312456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.312470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.312544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.312558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.312712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.312725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.312875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.312889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.312982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.312997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.313136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.313150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.313248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.313263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.313335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.313351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.313416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.313428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 
00:38:19.552 [2024-12-13 10:40:13.313501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.313515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.313596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.313610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.313692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.313705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.313773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.313787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.313871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.313884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.313967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.313980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.314128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.314142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.314305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.314319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.314394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.314408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.314490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.314504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 
00:38:19.552 [2024-12-13 10:40:13.314614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.314628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.314710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.314724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.314798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.314812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.314906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.314921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.314992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.315005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.315083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.315098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.315295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.552 [2024-12-13 10:40:13.315308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.552 qpair failed and we were unable to recover it. 00:38:19.552 [2024-12-13 10:40:13.315389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.315402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.315473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.315487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.315571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.315584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 
00:38:19.553 [2024-12-13 10:40:13.315652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.315665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.315734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.315747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.315888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.315901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.315968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.315981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.316061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.316074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.316146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.316159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.316229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.316242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.316312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.316325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.316469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.316483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.316573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.316586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 
00:38:19.553 [2024-12-13 10:40:13.316651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.316664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.316729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.316742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.316823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.316837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.316905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.316919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.317002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.317015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.317157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.317171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.317321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.317339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.317411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.317427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.317569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.317583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.317652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.317666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 
00:38:19.553 [2024-12-13 10:40:13.317812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.317827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.317899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.317912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.317978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.317991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.318068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.318082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.318167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.318180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.318328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.318342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.318490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.318536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.318673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.318716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.318845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.318887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.319094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.319136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 
00:38:19.553 [2024-12-13 10:40:13.319273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.319286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.319361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.319375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.319467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.319483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.319550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.319562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.319707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.319720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.553 qpair failed and we were unable to recover it. 00:38:19.553 [2024-12-13 10:40:13.319796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.553 [2024-12-13 10:40:13.319809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.319895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.319911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.320046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.320061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.320131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.320145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.320221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.320235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 
00:38:19.554 [2024-12-13 10:40:13.320338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.320352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.320530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.320544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.320618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.320633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.320696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.320710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.320881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.320906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.321030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.321057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.321161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.321185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.321285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.321300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.321371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.321385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.321460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.321474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 
00:38:19.554 [2024-12-13 10:40:13.321548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.321563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.321706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.321719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.321791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.321805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.321876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.321889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.321984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.321998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.322142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.322155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.322240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.322253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.322319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.322334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.322420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.322433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.322515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.322530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 
00:38:19.554 [2024-12-13 10:40:13.322616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.322631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.322721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.322763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.322958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.322999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.323145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.323206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.323320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.323344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.323528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.323551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.323717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.323761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.323966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.324010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.324150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.324193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.324326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.324339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 
00:38:19.554 [2024-12-13 10:40:13.324413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.324427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.324510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.324524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.324602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.324616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.554 [2024-12-13 10:40:13.324699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.554 [2024-12-13 10:40:13.324714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.554 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.324805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.324818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.324886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.324900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.324994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.325008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.325082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.325096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.325171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.325185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.325250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.325263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 
00:38:19.555 [2024-12-13 10:40:13.325337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.325350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.325466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.325481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.325624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.325638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.325779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.325794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.325869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.325884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.326027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.326041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.326129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.326147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.326286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.326301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.326368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.326381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.326515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.326530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 
00:38:19.555 [2024-12-13 10:40:13.326615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.326628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.326764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.326807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.327123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.327165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.327293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.327335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.327472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.327520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.327757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.327779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.327940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.327961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.328140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.328187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.328353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.328402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.328570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.328619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 
00:38:19.555 [2024-12-13 10:40:13.328765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.328789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.328936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.328951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.329092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.329106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.329194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.329208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.555 [2024-12-13 10:40:13.329307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.555 [2024-12-13 10:40:13.329320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.555 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.329412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.329425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.329515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.329529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.329621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.329635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.329707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.329721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.329868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.329882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 
00:38:19.556 [2024-12-13 10:40:13.329973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.329986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.330158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.330200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.330412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.330467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.330622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.330664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.330794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.330837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.331060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.331102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.331409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.331465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.331613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.331626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.331846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.331889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.332095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.332138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 
00:38:19.556 [2024-12-13 10:40:13.332424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.332535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.332708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.332751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.332900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.332941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.333088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.333130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.333343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.333392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.333561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.333575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.333645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.333660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.333928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.333942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.334095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.334132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.334412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.334469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 
00:38:19.556 [2024-12-13 10:40:13.334601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.334644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.334851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.334865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.335025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.335068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.335290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.335334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.335554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.335571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.335658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.335671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.335765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.335778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.335868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.335882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.336117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.336160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.336322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.336364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 
00:38:19.556 [2024-12-13 10:40:13.336514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.336557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.336761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.336775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.336919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.336935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.556 qpair failed and we were unable to recover it. 00:38:19.556 [2024-12-13 10:40:13.337035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.556 [2024-12-13 10:40:13.337050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.337211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.337255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.337480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.337525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.337664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.337718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.338502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.338529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.338708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.338722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.338872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.338885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 
00:38:19.557 [2024-12-13 10:40:13.339036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.339049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.339142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.339156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.339304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.339317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.339470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.339484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.339560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.339574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.339643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.339656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.339803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.339821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.339980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.339994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.340137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.340151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.340243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.340284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 
00:38:19.557 [2024-12-13 10:40:13.340473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.340519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.340652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.340695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.340913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.340955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.341101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.341143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.341291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.341349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.341633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.341695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.341953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.341976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.342147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.342168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.342324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.342346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.342433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.342463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 
00:38:19.557 [2024-12-13 10:40:13.342633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.342649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.342742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.342792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.342928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.342972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.343246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.343289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.343456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.343470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.343552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.343565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.343707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.343721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.343805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.343818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.343947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.343961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.557 qpair failed and we were unable to recover it. 00:38:19.557 [2024-12-13 10:40:13.344031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.557 [2024-12-13 10:40:13.344045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 
00:38:19.558 [2024-12-13 10:40:13.344180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.344194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.344343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.344356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.344504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.344518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.344669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.344683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.344760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.344774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.344979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.344993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.345745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.345771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.345870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.345884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.346026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.346039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.346269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.346282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 
00:38:19.558 [2024-12-13 10:40:13.346509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.346523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.346632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.346646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.346813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.346826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.346916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.346930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.347018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.347059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.347261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.347303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.347468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.347513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.347660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.347674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.347860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.347873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.347957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.347970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 
00:38:19.558 [2024-12-13 10:40:13.348125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.348138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.348211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.348224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.348340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.348353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.348434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.348454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.348679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.348694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.348777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.348790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.348874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.348887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.348961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.348974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.349063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.349077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 00:38:19.558 [2024-12-13 10:40:13.349169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.558 [2024-12-13 10:40:13.349183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.558 qpair failed and we were unable to recover it. 
00:38:19.558 [2024-12-13 10:40:13.349256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.558 [2024-12-13 10:40:13.349268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:19.558 qpair failed and we were unable to recover it.
[The same three-line failure sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from [2024-12-13 10:40:13.349256] through [2024-12-13 10:40:13.381529] (console timestamps 00:38:19.558 to 00:38:19.848). Nearly every repetition reports tqpair=0x61500033fe80; a single repetition at 10:40:13.356347 reports tqpair=0x615000326480, and a short run from 10:40:13.356581 to 10:40:13.357565 reports tqpair=0x615000350000, after which the failures resume against tqpair=0x61500033fe80.]
00:38:19.848 [2024-12-13 10:40:13.381616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.381630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.382420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.382446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.382611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.382626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.382718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.382733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.382953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.382996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.383205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.383249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.383406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.383463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.383610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.383664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.383807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.383843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.383915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.383928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 
00:38:19.848 [2024-12-13 10:40:13.383998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.384013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.384118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.384149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.384320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.384345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.384511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.384533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.384639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.384661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.384829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.384850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.384940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.384960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.385111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.385132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.385285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.385307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.385517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.385539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 
00:38:19.848 [2024-12-13 10:40:13.385689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.385705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.385785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.385799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.385884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.385898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.386018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.386031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.386170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.386186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.386274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.386289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.386360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.386375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.386474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.386489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.386633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.386646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.387345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.387370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 
00:38:19.848 [2024-12-13 10:40:13.387479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.387495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.387582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.387596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.387678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.387692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.848 qpair failed and we were unable to recover it. 00:38:19.848 [2024-12-13 10:40:13.387776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.848 [2024-12-13 10:40:13.387789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.387967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.387981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.388135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.388150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.388234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.388248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.388330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.388344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.388500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.388515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.388650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.388664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 
00:38:19.849 [2024-12-13 10:40:13.388760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.388774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.388843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.388857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.388952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.388967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.389126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.389168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.389301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.389344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.389476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.389521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.389657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.389700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.389846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.389890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.390130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.390175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.390307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.390351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 
00:38:19.849 [2024-12-13 10:40:13.390496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.390542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.390769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.390818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.391037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.391059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.391224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.391269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.391412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.391497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.391728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.391757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.391896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.391910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.392620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.392646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.392811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.392825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.393077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.393119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 
00:38:19.849 [2024-12-13 10:40:13.393261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.393303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.393442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.393500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.393641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.393654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.393803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.393816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.393954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.393970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.394192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.394206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.394317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.394331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.394438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.394461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.394612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.394626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 00:38:19.849 [2024-12-13 10:40:13.394859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.394873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.849 qpair failed and we were unable to recover it. 
00:38:19.849 [2024-12-13 10:40:13.394958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.849 [2024-12-13 10:40:13.394972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.395062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.395076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.395863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.395891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.396149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.396166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.396239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.396254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.396329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.396344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.396535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.396584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.396750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.396792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.396949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.396992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.397212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.397254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 
00:38:19.850 [2024-12-13 10:40:13.397463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.397478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.397642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.397656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.397735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.397749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.397885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.397899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.397978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.397992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.398064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.398078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.398216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.398233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.398398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.398442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.398597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.398641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.398782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.398825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 
00:38:19.850 [2024-12-13 10:40:13.398961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.399003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.399250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.399311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.399497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.399546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.399702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.399746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.399954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.399975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.400129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.400151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.400308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.400329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.400453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.400476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.400588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.400608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.400698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.400719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 
00:38:19.850 [2024-12-13 10:40:13.400808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.400830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.400989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.401011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.401103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.850 [2024-12-13 10:40:13.401124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.850 qpair failed and we were unable to recover it. 00:38:19.850 [2024-12-13 10:40:13.401290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.401306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.401457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.401473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.401540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.401553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.401646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.401659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.401738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.401751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.401813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.401827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.401892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.401905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 
00:38:19.851 [2024-12-13 10:40:13.401973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.401987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.402087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.402100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.403271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.403299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.403529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.403545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.403755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.403769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.403909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.403922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.404072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.404085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.404224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.404238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.404378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.404393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.404544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.404558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 
00:38:19.851 [2024-12-13 10:40:13.404644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.404659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.404750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.404764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.404852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.404867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.405011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.405054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.405341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.405382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.405598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.405646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.405726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.405739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.405973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.405987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.406071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.406085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.406153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.406166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 
00:38:19.851 [2024-12-13 10:40:13.406319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.406332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.406426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.406462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.406641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.406666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.406782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.406805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.406917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.406933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.407006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.407021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.407100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.407114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.407203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.407220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.407320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.407334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.407444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.407500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 
00:38:19.851 [2024-12-13 10:40:13.407699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.407743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.407878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.851 [2024-12-13 10:40:13.407921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.851 qpair failed and we were unable to recover it. 00:38:19.851 [2024-12-13 10:40:13.408066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.408109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.408243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.408286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.408432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.408545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.408758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.408801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.408936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.408978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.409125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.409168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.409325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.409368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.409588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.409632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 
00:38:19.852 [2024-12-13 10:40:13.409846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.409901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.410053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.410067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.410153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.410167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.410310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.410323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.410416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.410467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.410695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.410737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.410932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.410973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.411104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.411147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.411298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.411341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.411476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.411518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 
00:38:19.852 [2024-12-13 10:40:13.411737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.411784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.412009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.412022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.412104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.412118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.412180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.412194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.412368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.412385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.412596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.412614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.412712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.412725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.412871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.412884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.412953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.412967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.413127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.413141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 
00:38:19.852 [2024-12-13 10:40:13.413224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.413238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.413315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.413329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.413399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.413413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.413562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.413577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.413652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.413665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.413812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.413826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.413998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.414011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.414195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.414208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.414346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.414359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.852 qpair failed and we were unable to recover it. 00:38:19.852 [2024-12-13 10:40:13.414455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.852 [2024-12-13 10:40:13.414469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 
00:38:19.853 [2024-12-13 10:40:13.414641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.414655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.414819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.414833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.414922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.414936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.415010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.415024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.415161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.415174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.415411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.415426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.415522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.415536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.415630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.415644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.415725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.415738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.415889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.415903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 
00:38:19.853 [2024-12-13 10:40:13.416104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.416118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.416270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.416283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.416491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.416505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.416579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.416592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.416673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.416687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.416765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.416778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.416861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.416875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.416957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.416970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.417131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.417144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.417275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.417289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 
00:38:19.853 [2024-12-13 10:40:13.417367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.417380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.417471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.417485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.417567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.417580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.417662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.417676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.417841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.417855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.417935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.417949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.418033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.418047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.418118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.418131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.418216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.418228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.418367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.418381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 
00:38:19.853 [2024-12-13 10:40:13.418455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.418469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.418543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.418559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.418786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.418799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.418883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.418896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.418976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.418989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.419061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.419074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.419208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.419222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.419303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.853 [2024-12-13 10:40:13.419317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.853 qpair failed and we were unable to recover it. 00:38:19.853 [2024-12-13 10:40:13.419406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.419419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.419512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.419526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 
00:38:19.854 [2024-12-13 10:40:13.419703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.419717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.419851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.419864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.419932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.419946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.420039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.420052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.420155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.420169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.420270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.420285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.420365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.420379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.420465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.420479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.420578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.420592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.420664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.420681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 
00:38:19.854 [2024-12-13 10:40:13.420767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.420781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.420864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.420877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.420955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.420968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.421035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.421048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.421129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.421142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.421211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.421224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.421299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.421313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.421461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.421475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.421562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.421576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.421642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.421656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 
00:38:19.854 [2024-12-13 10:40:13.421722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.421736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.421866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.421879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.422016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.422030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.422122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.422135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.422216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.422230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.422319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.422332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.422480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.422494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.422571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.422584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.422680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.422694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.422864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.422878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 
00:38:19.854 [2024-12-13 10:40:13.423051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.423064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.423211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.423227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.423302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.423314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.423395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.423408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.423547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.423561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.423643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.423657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.423808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.854 [2024-12-13 10:40:13.423821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.854 qpair failed and we were unable to recover it. 00:38:19.854 [2024-12-13 10:40:13.423915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.423929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.423998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.424011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.424100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.424113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 
00:38:19.855 [2024-12-13 10:40:13.424253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.424267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.424425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.424438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.424589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.424603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.424817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.424831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.424912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.424928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.425008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.425022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.425094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.425108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.425257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.425269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.425471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.425507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.425633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.425658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 
00:38:19.855 [2024-12-13 10:40:13.425792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.425838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.425995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.426011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.426095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.426108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.426201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.426216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.426297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.426311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.426380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.426411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.426549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.426563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.426698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.426712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.426803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.426817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.426883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.426896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 
00:38:19.855 [2024-12-13 10:40:13.427039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.427053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.427190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.427203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.427278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.427292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.427433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.427447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.427586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.427599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.427687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.427701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.427906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.427920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.427997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.428010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.428100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.428114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.855 [2024-12-13 10:40:13.428273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.428286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 
00:38:19.855 [2024-12-13 10:40:13.428424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.855 [2024-12-13 10:40:13.428437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.855 qpair failed and we were unable to recover it. 00:38:19.856 [2024-12-13 10:40:13.428531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.856 [2024-12-13 10:40:13.428548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.856 qpair failed and we were unable to recover it. 00:38:19.856 [2024-12-13 10:40:13.428650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.856 [2024-12-13 10:40:13.428664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.856 qpair failed and we were unable to recover it. 00:38:19.856 [2024-12-13 10:40:13.428874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.856 [2024-12-13 10:40:13.428888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.856 qpair failed and we were unable to recover it. 00:38:19.856 [2024-12-13 10:40:13.428975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.856 [2024-12-13 10:40:13.428988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.856 qpair failed and we were unable to recover it. 00:38:19.856 [2024-12-13 10:40:13.429139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.856 [2024-12-13 10:40:13.429153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.856 qpair failed and we were unable to recover it. 00:38:19.856 [2024-12-13 10:40:13.429228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.856 [2024-12-13 10:40:13.429246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.856 qpair failed and we were unable to recover it. 00:38:19.856 [2024-12-13 10:40:13.429309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.856 [2024-12-13 10:40:13.429323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.856 qpair failed and we were unable to recover it. 00:38:19.856 [2024-12-13 10:40:13.429392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.856 [2024-12-13 10:40:13.429405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.856 qpair failed and we were unable to recover it. 00:38:19.856 [2024-12-13 10:40:13.429568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.856 [2024-12-13 10:40:13.429582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.856 qpair failed and we were unable to recover it. 
00:38:19.856 [2024-12-13 10:40:13.429666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.856 [2024-12-13 10:40:13.429679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:19.856 qpair failed and we were unable to recover it.
00:38:19.860 [2024-12-13 10:40:13.450340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.860 [2024-12-13 10:40:13.450382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:19.860 qpair failed and we were unable to recover it.
00:38:19.860 [2024-12-13 10:40:13.450570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.860 [2024-12-13 10:40:13.450622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:19.860 qpair failed and we were unable to recover it.
00:38:19.861 [2024-12-13 10:40:13.454747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.861 [2024-12-13 10:40:13.454772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:19.861 qpair failed and we were unable to recover it.
00:38:19.861 [2024-12-13 10:40:13.454863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.861 [2024-12-13 10:40:13.454879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:19.861 qpair failed and we were unable to recover it.
00:38:19.861 [2024-12-13 10:40:13.456574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.861 [2024-12-13 10:40:13.456629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:19.861 qpair failed and we were unable to recover it.
00:38:19.861 [2024-12-13 10:40:13.456760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.861 [2024-12-13 10:40:13.456802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.861 qpair failed and we were unable to recover it. 00:38:19.861 [2024-12-13 10:40:13.456934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.861 [2024-12-13 10:40:13.456975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.861 qpair failed and we were unable to recover it. 00:38:19.861 [2024-12-13 10:40:13.457110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.861 [2024-12-13 10:40:13.457153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.861 qpair failed and we were unable to recover it. 00:38:19.861 [2024-12-13 10:40:13.457290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.861 [2024-12-13 10:40:13.457303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.861 qpair failed and we were unable to recover it. 00:38:19.861 [2024-12-13 10:40:13.457371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.861 [2024-12-13 10:40:13.457383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.861 qpair failed and we were unable to recover it. 00:38:19.861 [2024-12-13 10:40:13.457477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.861 [2024-12-13 10:40:13.457491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.861 qpair failed and we were unable to recover it. 00:38:19.861 [2024-12-13 10:40:13.457570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.861 [2024-12-13 10:40:13.457583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.861 qpair failed and we were unable to recover it. 00:38:19.861 [2024-12-13 10:40:13.457666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.861 [2024-12-13 10:40:13.457700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.457976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.458064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.458271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.458322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 
00:38:19.862 [2024-12-13 10:40:13.458502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.458550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.458695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.458738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.458882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.458903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.459001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.459023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.459127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.459148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.459257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.459279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.459440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.459469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.459636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.459657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.459754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.459775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.459925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.459945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 
00:38:19.862 [2024-12-13 10:40:13.460125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.460181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.460387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.460444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.460587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.460630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.460849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.460870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.460981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.461025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.461168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.461212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.461411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.461463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.461603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.461645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.461855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.461898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.462060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.462103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 
00:38:19.862 [2024-12-13 10:40:13.462250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.462292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.462558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.462603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.462761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.462783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.462887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.462909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.463098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.463118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.463220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.463241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.463413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.463466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.463686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.463728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.463954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.463998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.464094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.464115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 
00:38:19.862 [2024-12-13 10:40:13.464342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.464384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.464559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.464603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.862 [2024-12-13 10:40:13.464797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.862 [2024-12-13 10:40:13.464839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.862 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.465020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.465042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.465129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.465183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.465389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.465433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.465656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.465696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.465829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.465842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.465927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.465941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.466079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.466092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 
00:38:19.863 [2024-12-13 10:40:13.466189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.466226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.466484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.466527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.466660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.466701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.466827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.466841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.466992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.467005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.467096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.467110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.467196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.467209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.467399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.467413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.467499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.467513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.467662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.467675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 
00:38:19.863 [2024-12-13 10:40:13.467825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.467839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.467952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.467968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.468050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.468064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.468221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.468234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.468440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.468459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.468591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.468604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.468862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.468875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.469014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.469027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.469113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.469128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.469250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.469293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 
00:38:19.863 [2024-12-13 10:40:13.469458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.469501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.469717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.469759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.469995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.470036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.470168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.470182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.470352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.470393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.470620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.470664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.470814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.470857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.470989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.471002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.471090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.471103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.471236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.471249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 
00:38:19.863 [2024-12-13 10:40:13.471407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.471468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.863 qpair failed and we were unable to recover it. 00:38:19.863 [2024-12-13 10:40:13.471703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.863 [2024-12-13 10:40:13.471746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.471971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.472013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.472173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.472187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.472273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.472287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.472364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.472377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.472523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.472537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.472613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.472626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.472783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.472797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.472933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.472946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 
00:38:19.864 [2024-12-13 10:40:13.473039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.473052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.473214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.473256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.473398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.473438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.473644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.473687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.473885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.473936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.474074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.474087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.474261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.474302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.474437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.474490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.474637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.474680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.474906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.474967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 
00:38:19.864 [2024-12-13 10:40:13.475174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.475188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.475268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.475284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.475436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.475453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.475533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.475546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.475626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.475639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.475718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.475732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.475805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.475818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.475907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.475921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.475996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.476009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.476154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.476195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 
00:38:19.864 [2024-12-13 10:40:13.476339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.476381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.476550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.476592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.476806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.476820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.476939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.476981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.477136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.477179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.864 [2024-12-13 10:40:13.477322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.864 [2024-12-13 10:40:13.477363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.864 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.477581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.477624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.477756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.477769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.477926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.477939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.478013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.478027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 
00:38:19.865 [2024-12-13 10:40:13.478201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.478214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.478371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.478385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.478470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.478484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.478579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.478593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.478752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.478765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.478849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.478862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.478939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.478952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.479022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.479035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.479115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.479128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.479198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.479228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 
00:38:19.865 [2024-12-13 10:40:13.479427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.479479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.479610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.479652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.479791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.479804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.479904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.479918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.479988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.480001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.480102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.480116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.480259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.480295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.480440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.480505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.480669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.480712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.480913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.480934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 
00:38:19.865 [2024-12-13 10:40:13.481025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.481046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.481198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.481222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.481358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.481380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.481484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.481506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.481736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.481757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.481933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.481954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.482054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.482075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.482176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.482197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.482285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.482306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.482421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.482443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 
00:38:19.865 [2024-12-13 10:40:13.482611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.482633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.482719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.482741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.865 qpair failed and we were unable to recover it. 00:38:19.865 [2024-12-13 10:40:13.482838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.865 [2024-12-13 10:40:13.482859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.483012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.483059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.483205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.483248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.483406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.483460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.483613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.483657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.483842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.483864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.483979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.484025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.484169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.484212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 
00:38:19.866 [2024-12-13 10:40:13.484365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.484407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.484577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.484621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.484757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.484799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.485131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.485174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.485373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.485414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.485593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.485636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.485808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.485821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.485973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.486016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.486217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.486260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.486520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.486563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 
00:38:19.866 [2024-12-13 10:40:13.486758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.486800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.486971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.486985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.487072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.487085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.487170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.487183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.487273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.487286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.487438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.487492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.487641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.487683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.487879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.487920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.488029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.488042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.488176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.488189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 
00:38:19.866 [2024-12-13 10:40:13.488336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.488349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.488432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.488446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.488658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.488672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.488751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.488765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.488868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.488890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.488974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.488987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.489060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.489072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.489162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.489202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.489330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.489372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.489537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.489581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 
00:38:19.866 [2024-12-13 10:40:13.489716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.866 [2024-12-13 10:40:13.489729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.866 qpair failed and we were unable to recover it. 00:38:19.866 [2024-12-13 10:40:13.489805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.489818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.489964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.490006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.490154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.490195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.490402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.490445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.490606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.490649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.490859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.490902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.491049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.491091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.491283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.491325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.491537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.491582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 
00:38:19.867 [2024-12-13 10:40:13.491837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.491879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.492081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.492124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.492276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.492319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.492535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.492579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.492731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.492774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.492971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.493013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.493227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.493270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.493498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.493541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.493747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.493796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.494000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.494042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 
00:38:19.867 [2024-12-13 10:40:13.494192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.494232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.494372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.494414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.494647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.494696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.494899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.494920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.495087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.495130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.495275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.495318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.495543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.495590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.495795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.495815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.495905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.495951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.496164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.496207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 
00:38:19.867 [2024-12-13 10:40:13.496333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.496375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.496600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.496643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.496850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.496892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.497048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.497062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.497139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.497152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.497238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.497252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.497389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.497403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.497533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.497576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.497714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.497756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 00:38:19.867 [2024-12-13 10:40:13.498042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.867 [2024-12-13 10:40:13.498085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.867 qpair failed and we were unable to recover it. 
00:38:19.868 [2024-12-13 10:40:13.498238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.498279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.498433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.498489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.498628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.498672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.498873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.498915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.499124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.499166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.499317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.499359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.499526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.499571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.499837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.499879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.500149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.500191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.500481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.500525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 
00:38:19.868 [2024-12-13 10:40:13.500743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.500789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.500864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.500878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.501042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.501090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.501237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.501280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.501485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.501528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.501810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.501852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.502064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.502106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.502321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.502362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.502512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.502562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.502822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.502865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 
00:38:19.868 [2024-12-13 10:40:13.502999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.503012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.503090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.503104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.503265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.503279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.503370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.503383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.503529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.503586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.503757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.503799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.504011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.504054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.504323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.504366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.504518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.504561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.504701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.504744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 
00:38:19.868 [2024-12-13 10:40:13.504884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.504926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.868 qpair failed and we were unable to recover it. 00:38:19.868 [2024-12-13 10:40:13.505102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.868 [2024-12-13 10:40:13.505116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.505296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.505338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.505536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.505579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.505731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.505774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.505978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.506020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.506172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.506214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.506346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.506388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.506605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.506660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.506816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.506829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 
00:38:19.869 [2024-12-13 10:40:13.506997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.507010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.507157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.507199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.507337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.507378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.507602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.507646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.507879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.507893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.508053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.508096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.508361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.508402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.508633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.508677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.508836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.508878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.509077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.509119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 
00:38:19.869 [2024-12-13 10:40:13.509252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.509294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.509440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.509497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.509690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.509732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.509941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.509954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.510039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.510052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.510186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.510199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.510347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.510360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.510599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.510643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.510801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.510849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.511055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.511095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 
00:38:19.869 [2024-12-13 10:40:13.511247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.511260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.511423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.511476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.511606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.511648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.511799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.511841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.511978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.511992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.512141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.512189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.512351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.512394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.512558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.512602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.869 [2024-12-13 10:40:13.512792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.869 [2024-12-13 10:40:13.512804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.869 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.512877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.512891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 
00:38:19.870 [2024-12-13 10:40:13.512972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.513012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.513268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.513311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.513520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.513563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.513713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.513756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.513975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.514018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.514248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.514290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.514552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.514595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.514806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.514849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.515058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.515100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.515302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.515315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 
00:38:19.870 [2024-12-13 10:40:13.515408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.515422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.515530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.515543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.515675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.515688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.515765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.515778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.515868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.515881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.515972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.516015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.516170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.516210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.516421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.516476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.516682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.516736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.516942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.516984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 
00:38:19.870 [2024-12-13 10:40:13.517115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.517128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.517285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.517298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.517378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.517392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.517477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.517491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.517571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.517584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.517665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.517679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.517763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.517777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.517916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.517929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.518078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.518094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.518166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.518180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 
00:38:19.870 [2024-12-13 10:40:13.518322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.518336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.518484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.518498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.518599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.518613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.518687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.518700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.518833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.518846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.518926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.518939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.519074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.519087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.519150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.870 [2024-12-13 10:40:13.519163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.870 qpair failed and we were unable to recover it. 00:38:19.870 [2024-12-13 10:40:13.519228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.519241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.519321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.519335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 
00:38:19.871 [2024-12-13 10:40:13.519469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.519483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.519561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.519575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.519653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.519695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.519914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.519955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.520217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.520230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.520318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.520331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.520400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.520413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.520547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.520561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.520649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.520663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.520733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.520746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 
00:38:19.871 [2024-12-13 10:40:13.520907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.520948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.521092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.521133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.521284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.521327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.521531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.521573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.521702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.521744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.522018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.522060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.522251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.522264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.522366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.522380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.522613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.522658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.522795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.522837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 
00:38:19.871 [2024-12-13 10:40:13.523037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.523080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.523212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.523225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.523384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.523397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.523566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.523580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.523729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.523772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.524034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.524076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.524236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.524278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.524495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.524539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.524748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.524797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.524980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.524993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 
00:38:19.871 [2024-12-13 10:40:13.525202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.525215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.525316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.525329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.525483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.525497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.525641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.525654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.525804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.525817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.525903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.525916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.871 qpair failed and we were unable to recover it. 00:38:19.871 [2024-12-13 10:40:13.526061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.871 [2024-12-13 10:40:13.526101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.526228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.526270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.526474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.526517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.526722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.526735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 
00:38:19.872 [2024-12-13 10:40:13.526902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.526915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.527134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.527149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.527368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.527386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.527594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.527608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.527743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.527757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.527904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.527917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.527994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.528008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.528217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.528259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.528382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.528422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.528593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.528635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 
00:38:19.872 [2024-12-13 10:40:13.528784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.528827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.528961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.529002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.529134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.529175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.529404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.529446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.529671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.529713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.529840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.529854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.530016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.530057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.530186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.530227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.530361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.530403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.530701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.530745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 
00:38:19.872 [2024-12-13 10:40:13.530957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.530999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.531206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.531248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.531466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.531510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.531721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.531763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.531964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.531977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.532071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.532083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.532233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.532275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.532427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.532524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.532652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.532701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.532963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.533005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 
00:38:19.872 [2024-12-13 10:40:13.533263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.533304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.533515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.533558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.533768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.533810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.533905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.533918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.534146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.534188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.872 qpair failed and we were unable to recover it. 00:38:19.872 [2024-12-13 10:40:13.534330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.872 [2024-12-13 10:40:13.534370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.534654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.534697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.534959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.535002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.535183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.535196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.535271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.535284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 
00:38:19.873 [2024-12-13 10:40:13.535379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.535392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.535543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.535555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.535706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.535719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.535803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.535817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.535896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.535910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.536142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.536184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.536391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.536433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.536585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.536626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.536886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.536928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.537083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.537108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 
00:38:19.873 [2024-12-13 10:40:13.537269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.537283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.537413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.537426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.537598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.537641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.537865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.537906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.538108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.538153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.538355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.538368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.538506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.538521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.538682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.538723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.538948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.538990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.539253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.539296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 
00:38:19.873 [2024-12-13 10:40:13.539514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.539557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.539790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.539831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.540027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.540068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.540269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.540311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.540603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.540647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.540876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.540922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.541190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.541247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.541404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.541418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.873 [2024-12-13 10:40:13.541634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.873 [2024-12-13 10:40:13.541684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.873 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.541831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.541874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 
00:38:19.874 [2024-12-13 10:40:13.542000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.542041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.542298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.542310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.542445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.542465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.542722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.542763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.542895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.542937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.543098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.543140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.543326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.543339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.543553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.543568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.543635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.543648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.543782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.543796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 
00:38:19.874 [2024-12-13 10:40:13.543890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.543903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.544175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.544217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.544438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.544492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.544805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.544848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.545051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.545093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.545213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.545227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.545380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.545428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.545732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.545776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.545991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.546033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.546262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.546305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 
00:38:19.874 [2024-12-13 10:40:13.546511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.546554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.546703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.546746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.546885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.546921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.546999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.547012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.547144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.547157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.547243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.547257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.547328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.547341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.547506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.547551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.547697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.547740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.547934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.547976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 
00:38:19.874 [2024-12-13 10:40:13.548177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.548221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.548443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.548463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.548690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.548704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.548782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.548795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.548932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.548945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.549084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.874 [2024-12-13 10:40:13.549126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.874 qpair failed and we were unable to recover it. 00:38:19.874 [2024-12-13 10:40:13.549267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.549307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.549467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.549511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.549657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.549706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.549904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.549945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 
00:38:19.875 [2024-12-13 10:40:13.550142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.550184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.550313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.550327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.550562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.550604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.550890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.550932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.551127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.551170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.551384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.551426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.551646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.551689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.551947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.551960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.552158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.552201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.552345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.552386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 
00:38:19.875 [2024-12-13 10:40:13.552626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.552670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.552891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.552933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.553138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.553181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.553391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.553404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.553634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.553676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.553909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.553951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.554143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.554184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.554473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.554517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.554726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.554768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.554933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.554977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 
00:38:19.875 [2024-12-13 10:40:13.555261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.555278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.555416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.555429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.555611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.555624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.555768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.555782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.555981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.555995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.556077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.556090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.556226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.556239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.556388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.556401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.556640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.556682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.556820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.556861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 
00:38:19.875 [2024-12-13 10:40:13.557063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.557104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.557262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.557275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.557432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.557489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.557682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.557724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.557864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.875 [2024-12-13 10:40:13.557905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.875 qpair failed and we were unable to recover it. 00:38:19.875 [2024-12-13 10:40:13.558150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.558163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.558338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.558350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.558509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.558553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.558709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.558758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.559019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.559032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 
00:38:19.876 [2024-12-13 10:40:13.559181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.559195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.559362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.559403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.559625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.559666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.559943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.559957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.560133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.560173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.560323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.560364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.560634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.560678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.560894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.560935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.561150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.561193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.561478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.561519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 
00:38:19.876 [2024-12-13 10:40:13.561729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.561770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.561984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.562026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.562178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.562192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.562345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.562369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.562527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.562540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.562751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.562793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.563082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.563124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.563412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.563425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.563653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.563667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.563896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.563909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 
00:38:19.876 [2024-12-13 10:40:13.564113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.564126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.564374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.564414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.564666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.564708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.564925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.564967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.565162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.565203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.565514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.565556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.565766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.565808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.566026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.566077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.566248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.566261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.566349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.566362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 
00:38:19.876 [2024-12-13 10:40:13.566599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.566644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.566791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.566834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.567043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.567085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.567371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.876 [2024-12-13 10:40:13.567384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.876 qpair failed and we were unable to recover it. 00:38:19.876 [2024-12-13 10:40:13.567491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.567505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.567678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.567719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.567924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.567965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.568269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.568311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.568572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.568627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.568773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.568815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 
00:38:19.877 [2024-12-13 10:40:13.568954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.568994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.569187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.569202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.569375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.569417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.569671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.569715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.569859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.569914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.570080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.570094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.570303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.570346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.570620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.570665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.570801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.570842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.571046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.571087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 
00:38:19.877 [2024-12-13 10:40:13.571310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.571356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.571497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.571511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.571729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.571772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.572027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.572070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.572331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.572373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.572512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.572555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.572873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.572916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.573130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.573173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.573427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.573440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.573606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.573650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 
00:38:19.877 [2024-12-13 10:40:13.573804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.573846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.573987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.574029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.574155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.574168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.574250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.574264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.574467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.574481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.574561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.574575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.574657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.574670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.574856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.574898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.877 [2024-12-13 10:40:13.575044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.877 [2024-12-13 10:40:13.575087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.877 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.575281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.575323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 
00:38:19.878 [2024-12-13 10:40:13.575485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.575529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.575728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.575770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.575974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.576015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.576217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.576258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.576499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.576543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.576699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.576740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.576953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.576994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.577124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.577173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.577486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.577537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.577733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.577775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 
00:38:19.878 [2024-12-13 10:40:13.577980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.578022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.578322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.578364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.578517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.578560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.578769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.578811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.578963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.579005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.579172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.579214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.579304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.579317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.579468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.579482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.579558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.579571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.579648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.579660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 
00:38:19.878 [2024-12-13 10:40:13.579751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.579765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.579908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.579921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.580094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.580136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.580350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.580392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.580668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.580710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.580919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.580961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.581184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.581198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.581444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.581496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.581724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.581764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.581882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.581895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 
00:38:19.878 [2024-12-13 10:40:13.582101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.582114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.582269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.582282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.582445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.582500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.582665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.582708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.878 qpair failed and we were unable to recover it. 00:38:19.878 [2024-12-13 10:40:13.582975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.878 [2024-12-13 10:40:13.583016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.583218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.583261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.583461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.583504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.583755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.583798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.583925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.583943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.584148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.584172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 
00:38:19.879 [2024-12-13 10:40:13.584309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.584322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.584464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.584479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.584634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.584674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.584943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.584984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.585121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.585158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.585292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.585305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.585461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.585504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.585729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.585772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.585991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.586039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.586253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.586294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 
00:38:19.879 [2024-12-13 10:40:13.586476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.586519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.586741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.586783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.587023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.587064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.587253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.587266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.587492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.587506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.587586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.587599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.587905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.587948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.588152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.588194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.588428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.588442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.588651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.588665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 
00:38:19.879 [2024-12-13 10:40:13.588835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.588848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.588982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.588995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.589163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.589177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.589283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.589324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.589528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.589569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.589781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.589822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.590108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.590149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.590308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.590350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.590482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.590523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 00:38:19.879 [2024-12-13 10:40:13.590666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.590707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.879 qpair failed and we were unable to recover it. 
00:38:19.879 [2024-12-13 10:40:13.590850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.879 [2024-12-13 10:40:13.590892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.591141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.591155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.591292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.591305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.591458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.591500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.591703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.591744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.591969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.592055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.592279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.592324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.592523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.592570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.592669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.592685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.592837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.592850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 
00:38:19.880 [2024-12-13 10:40:13.592956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.592970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.593141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.593184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.593395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.593438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.593602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.593643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.593907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.593948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.594165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.594208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.594402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.594415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.594488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.594525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.594729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.594772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.594990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.595032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 
00:38:19.880 [2024-12-13 10:40:13.595222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.595235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.595398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.595438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.595678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.595720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.595864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.595904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.596044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.596087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.596217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.596230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.596377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.596426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.596669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.596712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.596867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.596909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.597031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.597044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 
00:38:19.880 [2024-12-13 10:40:13.597231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.597271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.597487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.597530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.597744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.597786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.597985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.598027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.598272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.598316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.598578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.598596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.598828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.598841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.598932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.598946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.599040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.880 [2024-12-13 10:40:13.599053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.880 qpair failed and we were unable to recover it. 00:38:19.880 [2024-12-13 10:40:13.599134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.599148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 
00:38:19.881 [2024-12-13 10:40:13.599301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.599335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.599557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.599599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.599742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.599783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.599920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.599963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.600163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.600206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.600501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.600550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.600724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.600766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.600996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.601037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.601220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.601233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.601416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.601466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 
00:38:19.881 [2024-12-13 10:40:13.601617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.601658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.601918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.601957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.602105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.602118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.602238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.602279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.602415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.602467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.602682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.602724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.602992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.603033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.603192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.603205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.603302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.603343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.603562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.603606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 
00:38:19.881 [2024-12-13 10:40:13.603806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.603848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.603996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.604039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.604242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.604282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.604497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.604538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.604755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.604796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.605024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.605065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.605327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.605370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.605610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.605623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.605760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.605776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.605930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.605944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 
00:38:19.881 [2024-12-13 10:40:13.606024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.606037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.606268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.606310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.606525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.606569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.606766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.606808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.607018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.607060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.607379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.607420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.607661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.607704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.607915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.607957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.881 qpair failed and we were unable to recover it. 00:38:19.881 [2024-12-13 10:40:13.608088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.881 [2024-12-13 10:40:13.608131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.608323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.608363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 
00:38:19.882 [2024-12-13 10:40:13.608650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.608693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.608906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.608948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.609128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.609142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.609300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.609341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.609475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.609517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.609751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.609799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.610010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.610051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.610189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.610231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.610419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.610469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.610600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.610642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 
00:38:19.882 [2024-12-13 10:40:13.610898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.610938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.611172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.611214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.611455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.611498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.611700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.611743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.611969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.612009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.612236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.612250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.612389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.612402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.612475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.612489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.612581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.612594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.612737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.612757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 
00:38:19.882 [2024-12-13 10:40:13.612824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.612838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.612975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.612988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.613127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.613170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.613312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.613353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.613547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.613590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.613739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.613780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.613974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.614015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.614213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.614254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.614487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.614530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.614791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.614833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 
00:38:19.882 [2024-12-13 10:40:13.615041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.615083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.882 [2024-12-13 10:40:13.615222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.882 [2024-12-13 10:40:13.615262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.882 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.615469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.615483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.615646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.615687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.615960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.616003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.616152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.616194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.616491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.616535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.616700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.616742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.616888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.616929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.617144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.617186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 
00:38:19.883 [2024-12-13 10:40:13.617394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.617435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.617671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.617713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.617977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.618018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.618126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.618139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.618280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.618294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.618538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.618589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.618858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.618900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.619099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.619142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.619402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.619444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.619765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.619807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 
00:38:19.883 [2024-12-13 10:40:13.620032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.620072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.620348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.620390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.620607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.620651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.620802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.620844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.621038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.621080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.621315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.621357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.621615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.621659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.621881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.621923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.622103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.622116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 00:38:19.883 [2024-12-13 10:40:13.622203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.883 [2024-12-13 10:40:13.622215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.883 qpair failed and we were unable to recover it. 
00:38:19.883 [2024-12-13 10:40:13.622379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:19.883 [2024-12-13 10:40:13.622393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:19.883 qpair failed and we were unable to recover it.
00:38:19.883-00:38:19.889 [2024-12-13 10:40:13.622 - 10:40:13.664] The same three-line sequence (posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it") repeats continuously over this interval for tqpair handles 0x61500033fe80, 0x615000326480, 0x615000350000, and 0x61500032ff80, all targeting addr=10.0.0.2, port=4420.
00:38:19.889 [2024-12-13 10:40:13.664467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.889 [2024-12-13 10:40:13.664481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.889 qpair failed and we were unable to recover it. 00:38:19.889 [2024-12-13 10:40:13.664616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.889 [2024-12-13 10:40:13.664630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.889 qpair failed and we were unable to recover it. 00:38:19.889 [2024-12-13 10:40:13.664719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.889 [2024-12-13 10:40:13.664732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.889 qpair failed and we were unable to recover it. 00:38:19.889 [2024-12-13 10:40:13.664892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.889 [2024-12-13 10:40:13.664935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.889 qpair failed and we were unable to recover it. 00:38:19.889 [2024-12-13 10:40:13.665072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.889 [2024-12-13 10:40:13.665114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.889 qpair failed and we were unable to recover it. 00:38:19.889 [2024-12-13 10:40:13.665347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.889 [2024-12-13 10:40:13.665389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.889 qpair failed and we were unable to recover it. 00:38:19.889 [2024-12-13 10:40:13.665584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.889 [2024-12-13 10:40:13.665598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.889 qpair failed and we were unable to recover it. 00:38:19.889 [2024-12-13 10:40:13.665695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.889 [2024-12-13 10:40:13.665708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.889 qpair failed and we were unable to recover it. 00:38:19.889 [2024-12-13 10:40:13.665790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.889 [2024-12-13 10:40:13.665803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.889 qpair failed and we were unable to recover it. 00:38:19.889 [2024-12-13 10:40:13.665962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.889 [2024-12-13 10:40:13.665976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.889 qpair failed and we were unable to recover it. 
00:38:19.889 [2024-12-13 10:40:13.666192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.889 [2024-12-13 10:40:13.666237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.889 qpair failed and we were unable to recover it. 00:38:19.889 [2024-12-13 10:40:13.666432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.889 [2024-12-13 10:40:13.666508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.889 qpair failed and we were unable to recover it. 00:38:19.889 [2024-12-13 10:40:13.666725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.889 [2024-12-13 10:40:13.666769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.889 qpair failed and we were unable to recover it. 00:38:19.889 [2024-12-13 10:40:13.666911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.889 [2024-12-13 10:40:13.666953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.889 qpair failed and we were unable to recover it. 00:38:19.889 [2024-12-13 10:40:13.667173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.889 [2024-12-13 10:40:13.667214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.889 qpair failed and we were unable to recover it. 00:38:19.889 [2024-12-13 10:40:13.667498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.889 [2024-12-13 10:40:13.667548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.889 qpair failed and we were unable to recover it. 00:38:19.889 [2024-12-13 10:40:13.667686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.889 [2024-12-13 10:40:13.667728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.889 qpair failed and we were unable to recover it. 00:38:19.889 [2024-12-13 10:40:13.667889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.889 [2024-12-13 10:40:13.667932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.889 qpair failed and we were unable to recover it. 00:38:19.889 [2024-12-13 10:40:13.668164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.889 [2024-12-13 10:40:13.668206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.889 qpair failed and we were unable to recover it. 00:38:19.889 [2024-12-13 10:40:13.668400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.889 [2024-12-13 10:40:13.668413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.889 qpair failed and we were unable to recover it. 
00:38:19.889 [2024-12-13 10:40:13.668622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.889 [2024-12-13 10:40:13.668636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.889 qpair failed and we were unable to recover it. 00:38:19.889 [2024-12-13 10:40:13.668772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.889 [2024-12-13 10:40:13.668828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.669119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.669162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.669380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.669423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.669633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.669676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.669905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.669947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.670237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.670280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.670556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.670571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.670794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.670837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.671050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.671092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 
00:38:19.890 [2024-12-13 10:40:13.671292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.671335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.671615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.671659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.671803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.671846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.672122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.672165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.672324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.672366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.672557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.672571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.672641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.672655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.672824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.672837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.672986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.673029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.673167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.673210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 
00:38:19.890 [2024-12-13 10:40:13.673405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.673456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.673689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.673703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.673856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.673870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.673963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.673976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.674121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.674163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.674436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.674499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.674760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.674803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.675102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.675144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.675373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.675416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.675582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.675595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 
00:38:19.890 [2024-12-13 10:40:13.675832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.675874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.676067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.676109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.676395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.676438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.676602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.676644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.676847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.676889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.677011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.677058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.677250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.677293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.677465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.677510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.677785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.890 [2024-12-13 10:40:13.677799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.890 qpair failed and we were unable to recover it. 00:38:19.890 [2024-12-13 10:40:13.678003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.678016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 
00:38:19.891 [2024-12-13 10:40:13.678181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.678195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.678293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.678307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.678469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.678483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.678649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.678662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.678808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.678850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.679062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.679103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.679287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.679300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.679446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.679470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.679719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.679757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.680052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.680095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 
00:38:19.891 [2024-12-13 10:40:13.680291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.680333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.680503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.680548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.680767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.680780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.680874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.680887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.681040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.681054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.681207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.681229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.681368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.681382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.681529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.681543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.681759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.681773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.681924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.681937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 
00:38:19.891 [2024-12-13 10:40:13.682076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.682089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.682314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.682357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.682579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.682623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.682853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.682896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.683112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.683154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.683305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.683347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.683637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.683650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.683799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.683841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.683984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.684026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.684228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.684269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 
00:38:19.891 [2024-12-13 10:40:13.684475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.684519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.684768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.684782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.685010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.685023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.685101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.685115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.685300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.685341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.891 [2024-12-13 10:40:13.685550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.891 [2024-12-13 10:40:13.685600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.891 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.685812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.685853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.686065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.686108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.686393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.686436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.686617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.686661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 
00:38:19.892 [2024-12-13 10:40:13.686879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.686921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.687065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.687107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.687232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.687273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.687556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.687600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.687792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.687834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.688125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.688175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.688331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.688344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.688492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.688537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.688749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.688791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.689014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.689057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 
00:38:19.892 [2024-12-13 10:40:13.689286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.689328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.689578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.689593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.689780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.689794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.689961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.690004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.690290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.690333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.690509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.690522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.690676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.690731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.690870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.690912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.691123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.691166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.691460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.691503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 
00:38:19.892 [2024-12-13 10:40:13.691646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.691688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.691902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.691944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.692162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.692205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.892 [2024-12-13 10:40:13.692367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.892 [2024-12-13 10:40:13.692410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.892 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.692559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.692602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.692862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.692904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.693162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.693204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.693405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.693459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.693723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.693765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.693981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.694023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 
00:38:19.893 [2024-12-13 10:40:13.694170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.694212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.694510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.694554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.694830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.694844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.695011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.695025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.695284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.695327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.695471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.695520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.695750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.695793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.696054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.696096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.696370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.696413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.696628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.696684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 
00:38:19.893 [2024-12-13 10:40:13.696826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.696869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.697156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.697198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.697383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.697396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.697555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.697570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 4160045 Killed "${NVMF_APP[@]}" "$@" 00:38:19.893 [2024-12-13 10:40:13.697793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.697807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.697946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.697960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.698050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.698064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.698222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.698236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 10:40:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:38:19.893 [2024-12-13 10:40:13.698385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.698400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 
00:38:19.893 [2024-12-13 10:40:13.698478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.698492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.698652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.698665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 10:40:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:19.893 [2024-12-13 10:40:13.698811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.698826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.698891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.698904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.699057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.699072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 10:40:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:19.893 [2024-12-13 10:40:13.699163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.699176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.699380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.699394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 10:40:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:19.893 [2024-12-13 10:40:13.699487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.699501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 
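[annotation] nvmfappstart -m 0xF0 restarts the target application with a hexadecimal CPU core mask; 0xF0 has bits 4 through 7 set, so the app is pinned to cores 4-7. A small worked example of that arithmetic (plain C, illustration only; the flag semantics follow SPDK's usual -m <core mask> convention):

/* Decode a hexadecimal core mask such as the 0xF0 passed via "-m 0xF0":
 * each set bit selects one CPU core, so 0xF0 -> cores 4, 5, 6, 7. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xF0;
    printf("core mask 0x%lX selects cores:", mask);
    for (int core = 0; core < 64; core++) {
        if (mask & (1UL << core))
            printf(" %d", core);
    }
    printf("\n");
    return 0;
}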
00:38:19.893 10:40:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:19.893 [2024-12-13 10:40:13.699713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.699727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.699913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.699927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.700069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.700083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.700181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.700194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.700268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.700281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.700517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.893 [2024-12-13 10:40:13.700531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.893 qpair failed and we were unable to recover it. 00:38:19.893 [2024-12-13 10:40:13.700612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.700626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.700888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.700902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.701060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.701073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 
00:38:19.894 [2024-12-13 10:40:13.701245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.701259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.701346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.701359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.701565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.701579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.701782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.701796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.701939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.701953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.702092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.702106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.702252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.702266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.702367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.702380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.702475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.702490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.702647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.702660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 
00:38:19.894 [2024-12-13 10:40:13.702739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.702753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.702837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.702850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.703027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.703040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.703194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.703207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.703341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.703355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.703459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.703474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.703545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.703558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.703765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.703779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.703844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.703858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.704080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.704094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 
00:38:19.894 [2024-12-13 10:40:13.704248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.704264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.704361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.704375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.704455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.704469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.704545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.704559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.704635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.704648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.704721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.704734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.704811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.704825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.704909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.704923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.705016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.705030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.705115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.705129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 
00:38:19.894 [2024-12-13 10:40:13.705214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.705228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.705329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.705343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.705490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.705504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.705753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.705767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.706020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.706034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.706185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.706199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 10:40:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=4160954 00:38:19.894 [2024-12-13 10:40:13.706405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.706420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 00:38:19.894 [2024-12-13 10:40:13.706652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.894 [2024-12-13 10:40:13.706675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.894 qpair failed and we were unable to recover it. 
00:38:19.894 10:40:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 4160954 00:38:19.894 10:40:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:19.894 [2024-12-13 10:40:13.706936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.706950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 10:40:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 4160954 ']' 00:38:19.895 [2024-12-13 10:40:13.707098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.707112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.707266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.707280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 10:40:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:19.895 [2024-12-13 10:40:13.707427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.707441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 10:40:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:19.895 [2024-12-13 10:40:13.707711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.707726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.707926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.707940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 10:40:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:19.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
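[annotation] waitforlisten 4160954 blocks until the freshly started nvmf_tgt (PID 4160954, launched inside the cvl_0_0_ns_spdk network namespace with what appear to be the usual SPDK options: -i shared-memory id, -e tracepoint group mask, -m core mask) is accepting RPCs on /var/tmp/spdk.sock, hence the "Waiting for process to start up and listen on UNIX domain socket" echo above. A hedged sketch of that kind of readiness poll (hypothetical helper, not the actual common.sh implementation):

/* Poll a Unix-domain socket until connect() succeeds or a timeout expires;
 * roughly what waiting on /var/tmp/spdk.sock amounts to. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static bool wait_for_rpc_socket(const char *path, int timeout_sec)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < timeout_sec * 10; i++) {        /* poll every 100 ms */
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd >= 0 && connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return true;                                 /* target is listening */
        }
        if (fd >= 0)
            close(fd);
        usleep(100 * 1000);
    }
    return false;                                        /* never came up */
}

int main(void)
{
    if (!wait_for_rpc_socket("/var/tmp/spdk.sock", 30))
        fprintf(stderr, "target did not start listening in time\n");
    return 0;
}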
00:38:19.895 [2024-12-13 10:40:13.708115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.708128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 10:40:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:19.895 [2024-12-13 10:40:13.708300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.708314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.708545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 10:40:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:19.895 [2024-12-13 10:40:13.708559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.708791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.708804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.708908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.708921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.709166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.709179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.709326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.709339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.709492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.709506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.709692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.709708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 
00:38:19.895 [2024-12-13 10:40:13.709852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.709867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.710014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.710027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.710127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.710145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.710300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.710314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.710533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.710549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.710720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.710734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.710830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.710844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.710987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.711001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.711152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.711166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.711252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.711266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 
00:38:19.895 [2024-12-13 10:40:13.711346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.711360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.711511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.711525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.711661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.711675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.711837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.711851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.711944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.711959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.712088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.712101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.712191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.712205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.712339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.712353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.712576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.712592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.712690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.712703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 
00:38:19.895 [2024-12-13 10:40:13.712769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.712783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.712848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.712862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:19.895 [2024-12-13 10:40:13.713000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:19.895 [2024-12-13 10:40:13.713013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:19.895 qpair failed and we were unable to recover it. 00:38:20.181 [2024-12-13 10:40:13.713114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.181 [2024-12-13 10:40:13.713127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.181 qpair failed and we were unable to recover it. 00:38:20.181 [2024-12-13 10:40:13.713204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.713219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.713395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.713409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.713561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.713575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.713660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.713673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.713866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.713880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.714014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.714028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 
00:38:20.182 [2024-12-13 10:40:13.714104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.714117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.714250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.714264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.714576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.714590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.714809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.714823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.714965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.714979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.715132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.715146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.715370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.715384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.715593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.715607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.715758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.715770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.715940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.715952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 
00:38:20.182 [2024-12-13 10:40:13.716107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.716120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.716268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.716281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.716432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.716454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.716609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.716625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.716718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.716732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.716875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.716891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.717038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.717052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.717159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.717174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.717352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.717373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.717517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.717533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 
00:38:20.182 [2024-12-13 10:40:13.717698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.717714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.717916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.717931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.718097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.718113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.718328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.718343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.718414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.718430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.718582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.718597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.718821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.718836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.719039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.719054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.719206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.719221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.719453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.719468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 
00:38:20.182 [2024-12-13 10:40:13.719696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.719711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.719915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.719930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.182 [2024-12-13 10:40:13.720021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.182 [2024-12-13 10:40:13.720036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.182 qpair failed and we were unable to recover it. 00:38:20.183 [2024-12-13 10:40:13.720265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.183 [2024-12-13 10:40:13.720280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.183 qpair failed and we were unable to recover it. 00:38:20.183 [2024-12-13 10:40:13.720457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.183 [2024-12-13 10:40:13.720472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.183 qpair failed and we were unable to recover it. 00:38:20.183 [2024-12-13 10:40:13.720631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.183 [2024-12-13 10:40:13.720646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.183 qpair failed and we were unable to recover it. 00:38:20.183 [2024-12-13 10:40:13.720895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.183 [2024-12-13 10:40:13.720911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.183 qpair failed and we were unable to recover it. 00:38:20.183 [2024-12-13 10:40:13.721056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.183 [2024-12-13 10:40:13.721072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.183 qpair failed and we were unable to recover it. 00:38:20.183 [2024-12-13 10:40:13.721303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.183 [2024-12-13 10:40:13.721320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.183 qpair failed and we were unable to recover it. 00:38:20.183 [2024-12-13 10:40:13.721494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.183 [2024-12-13 10:40:13.721512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.183 qpair failed and we were unable to recover it. 
00:38:20.183 [2024-12-13 10:40:13.721807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:20.183 [2024-12-13 10:40:13.721824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:20.183 qpair failed and we were unable to recover it.
[2024-12-13 10:40:13.722003 through 10:40:13.755457] The same pair of errors, posix_sock_create "connect() failed, errno = 111" followed by nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420", repeats for every subsequent connection attempt in this interval, and each attempt ends with "qpair failed and we were unable to recover it."
00:38:20.188 [2024-12-13 10:40:13.755535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.188 [2024-12-13 10:40:13.755550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.188 qpair failed and we were unable to recover it. 00:38:20.188 [2024-12-13 10:40:13.755636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.188 [2024-12-13 10:40:13.755650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.188 qpair failed and we were unable to recover it. 00:38:20.188 [2024-12-13 10:40:13.755750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.188 [2024-12-13 10:40:13.755764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.188 qpair failed and we were unable to recover it. 00:38:20.188 [2024-12-13 10:40:13.755863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.188 [2024-12-13 10:40:13.755878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.188 qpair failed and we were unable to recover it. 00:38:20.188 [2024-12-13 10:40:13.756009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.188 [2024-12-13 10:40:13.756024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.188 qpair failed and we were unable to recover it. 00:38:20.188 [2024-12-13 10:40:13.756112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.188 [2024-12-13 10:40:13.756128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.188 qpair failed and we were unable to recover it. 00:38:20.188 [2024-12-13 10:40:13.756212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.188 [2024-12-13 10:40:13.756228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.188 qpair failed and we were unable to recover it. 00:38:20.188 [2024-12-13 10:40:13.756318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.188 [2024-12-13 10:40:13.756334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.188 qpair failed and we were unable to recover it. 00:38:20.188 [2024-12-13 10:40:13.756417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.188 [2024-12-13 10:40:13.756432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.188 qpair failed and we were unable to recover it. 00:38:20.188 [2024-12-13 10:40:13.756504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.188 [2024-12-13 10:40:13.756520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.188 qpair failed and we were unable to recover it. 
00:38:20.188 [2024-12-13 10:40:13.756656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.188 [2024-12-13 10:40:13.756672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.188 qpair failed and we were unable to recover it. 00:38:20.188 [2024-12-13 10:40:13.756753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.756768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.756916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.756932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.757010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.757025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.757172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.757187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.757410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.757426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.757529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.757546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.757615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.757630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.757711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.757726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.757803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.757819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 
00:38:20.189 [2024-12-13 10:40:13.757991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.758006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.758141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.758156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.758222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.758237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.758387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.758405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.758542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.758563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.758701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.758716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.758811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.758826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.758969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.758984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.759057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.759074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.759155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.759172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 
00:38:20.189 [2024-12-13 10:40:13.759248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.759263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.759395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.759411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.759582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.759597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.759738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.759753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.759897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.759912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.760071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.760085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.760232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.760246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.760401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.760418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.760641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.760656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.760913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.760928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 
00:38:20.189 [2024-12-13 10:40:13.761066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.761081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.761163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.761178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.761281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.761296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.761387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.761403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.761511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.761527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.761603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.761618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.761755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.761771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.761927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.761943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.762085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.762100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 00:38:20.189 [2024-12-13 10:40:13.762189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.189 [2024-12-13 10:40:13.762204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.189 qpair failed and we were unable to recover it. 
00:38:20.189 [2024-12-13 10:40:13.762289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.762304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.762458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.762474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.762557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.762572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.762672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.762688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.762767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.762782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.762919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.762935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.763163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.763180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.763361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.763376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.763528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.763545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.763783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.763798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 
00:38:20.190 [2024-12-13 10:40:13.764021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.764037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.764185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.764199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.764397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.764413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.764495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.764511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.764662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.764677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.764764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.764780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.764925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.764940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.765145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.765161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.765248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.765263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.765431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.765465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 
00:38:20.190 [2024-12-13 10:40:13.765582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.765598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.765748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.765763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.765834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.765849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.765948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.765964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.766040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.766056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.766135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.766150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.766245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.766260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.766428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.766443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.766530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.766545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.766631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.766646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 
00:38:20.190 [2024-12-13 10:40:13.766715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.766730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.766864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.766880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.767019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.767034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.767247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.767263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.190 [2024-12-13 10:40:13.767351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.190 [2024-12-13 10:40:13.767367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.190 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.767466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.767481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.767560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.767574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.767723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.767738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.767871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.767891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.767975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.767990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 
00:38:20.191 [2024-12-13 10:40:13.768057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.768073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.768227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.768242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.768324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.768339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.768491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.768506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.768643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.768659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.768807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.768823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.769073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.769088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.769229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.769245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.769332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.769347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.769505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.769521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 
00:38:20.191 [2024-12-13 10:40:13.769684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.769700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.769790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.769805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.769953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.769969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.770074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.770091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.770193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.770209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.770298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.770313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.770402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.770418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.770575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.770592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.770682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.770698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.770831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.770848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 
00:38:20.191 [2024-12-13 10:40:13.770923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.770939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.771083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.771099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.771203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.771218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.771299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.771314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.771397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.771413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.771564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.771581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.771670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.771687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.771757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.771772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.771984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.772004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.772091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.772106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 
00:38:20.191 [2024-12-13 10:40:13.772278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.772293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.772387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.772402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.772491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.772507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.191 [2024-12-13 10:40:13.772652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.191 [2024-12-13 10:40:13.772667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.191 qpair failed and we were unable to recover it. 00:38:20.192 [2024-12-13 10:40:13.772740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.192 [2024-12-13 10:40:13.772755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.192 qpair failed and we were unable to recover it. 00:38:20.192 [2024-12-13 10:40:13.772976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.192 [2024-12-13 10:40:13.772991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.192 qpair failed and we were unable to recover it. 00:38:20.192 [2024-12-13 10:40:13.773192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.192 [2024-12-13 10:40:13.773207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.192 qpair failed and we were unable to recover it. 00:38:20.192 [2024-12-13 10:40:13.773285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.192 [2024-12-13 10:40:13.773300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.192 qpair failed and we were unable to recover it. 00:38:20.192 [2024-12-13 10:40:13.773381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.192 [2024-12-13 10:40:13.773396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.192 qpair failed and we were unable to recover it. 00:38:20.192 [2024-12-13 10:40:13.773490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.192 [2024-12-13 10:40:13.773506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.192 qpair failed and we were unable to recover it. 
00:38:20.192 [2024-12-13 10:40:13.773654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:20.192 [2024-12-13 10:40:13.773670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:20.192 qpair failed and we were unable to recover it.
[... the same sequence for tqpair=0x61500033fe80 repeats from 10:40:13.773810 through 10:40:13.774331 ...]
00:38:20.192 [2024-12-13 10:40:13.774543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:20.192 [2024-12-13 10:40:13.774595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:20.192 qpair failed and we were unable to recover it.
00:38:20.192 [2024-12-13 10:40:13.774837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:20.192 [2024-12-13 10:40:13.774887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420
00:38:20.192 qpair failed and we were unable to recover it.
00:38:20.192 [2024-12-13 10:40:13.775111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:20.192 [2024-12-13 10:40:13.775157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:20.192 qpair failed and we were unable to recover it.
[... the same sequence for tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 then resumes and repeats continuously from 10:40:13.775411 through 10:40:13.779978, wall-clock 00:38:20.192-00:38:20.193 ...]
00:38:20.193 [2024-12-13 10:40:13.780078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.780093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.780248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.780263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.780348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.780363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.780445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.780476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.780645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.780661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.780852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.780868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.780988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.781016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.781258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.781290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.781476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.781505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.781607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.781625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 
00:38:20.193 [2024-12-13 10:40:13.781774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.781790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.781940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.781954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.782107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.782121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.782273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.782288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.782436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.782422] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:38:20.193 [2024-12-13 10:40:13.782457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.782502] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:20.193 [2024-12-13 10:40:13.782622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.782636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.782716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.782729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.782808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.782820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.782905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.782919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 
00:38:20.193 [2024-12-13 10:40:13.782997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.783009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.783156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.783170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.783257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.783272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.783342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.783358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.783586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.783603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.783750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.783765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.783915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.783930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.784072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.784087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.784173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.784188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.784392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.784409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 
00:38:20.193 [2024-12-13 10:40:13.784571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.784587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.193 [2024-12-13 10:40:13.784680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.193 [2024-12-13 10:40:13.784695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.193 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.784841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.784857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.785008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.785023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.785120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.785136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.785283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.785299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.785459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.785475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.785681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.785697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.785790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.785805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.785978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.785994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 
00:38:20.194 [2024-12-13 10:40:13.786165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.786181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.786397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.786412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.786552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.786568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.786651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.786667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.786822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.786837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.787038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.787054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.787147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.787162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.787324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.787340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.787495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.787512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.787670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.787687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 
00:38:20.194 [2024-12-13 10:40:13.787889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.787905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.788006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.788021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.788132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.788149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.788295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.788315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.788396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.788411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.788490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.788506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.788598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.788613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.788768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.788783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.788883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.788899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.788982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.789000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 
00:38:20.194 [2024-12-13 10:40:13.789144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.789159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.789228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.789244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.789331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.789346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.789489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.789506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.789656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.789671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.789839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.789856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.789997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.790013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.790204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.790219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.790362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.790378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.194 [2024-12-13 10:40:13.790514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.790530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 
00:38:20.194 [2024-12-13 10:40:13.790609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.194 [2024-12-13 10:40:13.790624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.194 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.790777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.790793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.790893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.790908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.791067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.791083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.791154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.791170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.791245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.791260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.791350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.791365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.791504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.791520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.791678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.791692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.791834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.791850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 
00:38:20.195 [2024-12-13 10:40:13.792010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.792025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.792170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.792186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.792319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.792335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.792474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.792493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.792665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.792681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.792768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.792784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.792945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.792960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.793122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.793137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.793212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.793228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.793373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.793388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 
00:38:20.195 [2024-12-13 10:40:13.793560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.793575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.793684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.793699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.793928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.793943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.794094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.794109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.794244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.794259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.794339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.794355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.794427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.794442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.794528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.794543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.794684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.794700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.794801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.794818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 
00:38:20.195 [2024-12-13 10:40:13.794919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.794934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.795087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.795101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.795242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.795256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.795339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.795353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.795555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.795572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.195 qpair failed and we were unable to recover it. 00:38:20.195 [2024-12-13 10:40:13.795707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.195 [2024-12-13 10:40:13.795724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.795909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.795925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.796149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.796165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.796259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.796274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.796356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.796370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 
00:38:20.196 [2024-12-13 10:40:13.796444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.796466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.796616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.796631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.796729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.796744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.796838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.796853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.797001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.797018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.797115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.797130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.797280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.797297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.797445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.797491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.797578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.797593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.797680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.797694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 
00:38:20.196 [2024-12-13 10:40:13.797792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.797806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.797966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.797983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.798132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.798148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.798299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.798314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.798383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.798398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.798480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.798497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.798580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.798595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.798746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.798762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.798907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.798922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.799013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.799030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 
00:38:20.196 [2024-12-13 10:40:13.799120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.799135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.799318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.799342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.799501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.799517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.799659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.799675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.799768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.799783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.799874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.799890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.799971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.799986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.800079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.800095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.800184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.800200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.800372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.800390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 
00:38:20.196 [2024-12-13 10:40:13.800474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.800490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.800563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.800578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.800675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.800690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.800826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.800842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.196 [2024-12-13 10:40:13.801005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.196 [2024-12-13 10:40:13.801022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.196 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.801089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.801104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.801238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.801254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.801347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.801363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.801507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.801524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.801612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.801628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 
00:38:20.197 [2024-12-13 10:40:13.801709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.801724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.801814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.801830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.801966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.801981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.802090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.802106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.802258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.802275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.802471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.802487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.802562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.802577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.802660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.802675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.802841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.802857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.802963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.802981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 
00:38:20.197 [2024-12-13 10:40:13.803125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.803141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.803232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.803247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.803343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.803358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.803526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.803542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.803683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.803698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.803767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.803783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.803987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.804005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.804100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.804117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.804214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.804230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.804382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.804398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 
00:38:20.197 [2024-12-13 10:40:13.804473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.804488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.804565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.804580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.804665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.804680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.804762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.804776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.804922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.804938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.805075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.805090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.805161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.805176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.805253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.805269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.805352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.805368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.805527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.805546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 
00:38:20.197 [2024-12-13 10:40:13.805687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.805703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.805872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.805893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.805985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.197 [2024-12-13 10:40:13.806001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.197 qpair failed and we were unable to recover it. 00:38:20.197 [2024-12-13 10:40:13.806160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.806176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.806268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.806285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.806359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.806374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.806466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.806482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.806631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.806646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.806725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.806740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.806827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.806842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 
00:38:20.198 [2024-12-13 10:40:13.806917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.806933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.807000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.807015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.807096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.807112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.807344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.807360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.807425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.807440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.807622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.807638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.807714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.807730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.807809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.807824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.807966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.807982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.808066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.808081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 
00:38:20.198 [2024-12-13 10:40:13.808288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.808304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.808397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.808413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.808558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.808575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.808648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.808664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.808753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.808769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.808847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.808863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.809071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.809087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.809221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.809236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.809372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.809387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.809459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.809475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 
00:38:20.198 [2024-12-13 10:40:13.809571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.809589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.809660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.809675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.809820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.809835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.809982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.809998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.810149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.810164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.810241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.810256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.810395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.810409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.810546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.810564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.810713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.810730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.810880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.810900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 
00:38:20.198 [2024-12-13 10:40:13.811061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.811076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.198 [2024-12-13 10:40:13.811151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.198 [2024-12-13 10:40:13.811166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.198 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.811255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.811270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.811415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.811429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.811598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.811614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.811699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.811714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.811783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.811798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.811887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.811902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.811976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.811991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.812153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.812169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 
00:38:20.199 [2024-12-13 10:40:13.812237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.812252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.812391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.812407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.812499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.812515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.812671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.812686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.812779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.812794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.812868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.812883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.812979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.812995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.813071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.813085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.813157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.813172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.813248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.813264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 
00:38:20.199 [2024-12-13 10:40:13.813342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.813357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.813498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.813514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.813668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.813684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.813764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.813780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.813930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.813946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.814017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.814052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.814218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.814254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.814459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.814487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.814597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.814621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.814838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.814856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 
00:38:20.199 [2024-12-13 10:40:13.814941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.814956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.815037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.815053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.815207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.815222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.815357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.815373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.815515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.815531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.815666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.199 [2024-12-13 10:40:13.815682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.199 qpair failed and we were unable to recover it. 00:38:20.199 [2024-12-13 10:40:13.815770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.815785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.815859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.815873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.815957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.815972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.816048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.816065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 
00:38:20.200 [2024-12-13 10:40:13.816208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.816224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.816314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.816330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.816477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.816492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.816571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.816587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.816660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.816676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.816817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.816832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.816917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.816933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.817070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.817086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.817170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.817186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.817253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.817268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 
00:38:20.200 [2024-12-13 10:40:13.817429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.817445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.817613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.817629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.817785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.817800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.817892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.817907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.818055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.818071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.818153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.818167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.818245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.818260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.818324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.818338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.818422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.818437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.818518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.818533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 
00:38:20.200 [2024-12-13 10:40:13.818684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.818699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.818836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.818851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.818924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.818938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.819024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.819041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.819135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.819151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.819356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.819372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.819470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.819501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.819686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.819710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.819802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.819825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.819997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.820014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 
00:38:20.200 [2024-12-13 10:40:13.820172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.820188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.820325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.820340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.820424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.820440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.820605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.820621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.820710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.200 [2024-12-13 10:40:13.820725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.200 qpair failed and we were unable to recover it. 00:38:20.200 [2024-12-13 10:40:13.820805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.820820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.820900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.820914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.821115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.821131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.821220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.821235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.821317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.821335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 
00:38:20.201 [2024-12-13 10:40:13.821481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.821497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.821563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.821579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.821722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.821738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.821888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.821903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.822041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.822056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.822138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.822153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.822251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.822267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.822341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.822356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.822492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.822508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.822578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.822594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 
00:38:20.201 [2024-12-13 10:40:13.822665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.822681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.822750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.822765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.822834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.822849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.823060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.823075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.823227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.823242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.823311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.823331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.823473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.823489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.823560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.823575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.823729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.823746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.823891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.823905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 
00:38:20.201 [2024-12-13 10:40:13.823999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.824014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.824175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.824190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.824279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.824295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.824394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.824410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.824483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.824499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.824635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.824650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.824756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.824786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.824905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.824929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.825095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.825118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.825274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.825297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 
00:38:20.201 [2024-12-13 10:40:13.825386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.825400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.825492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.825508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.825647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.825661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.825729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.201 [2024-12-13 10:40:13.825743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.201 qpair failed and we were unable to recover it. 00:38:20.201 [2024-12-13 10:40:13.825893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.825908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.825978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.825995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.826158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.826173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.826376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.826391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.826482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.826499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.826717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.826737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 
00:38:20.202 [2024-12-13 10:40:13.826948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.826963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.827027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.827042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.827215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.827230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.827477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.827494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.827592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.827610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.827693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.827708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.827799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.827815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.827961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.827977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.828126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.828142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.828286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.828301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 
00:38:20.202 [2024-12-13 10:40:13.828439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.828458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.828619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.828635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.828783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.828798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.829037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.829051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.829128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.829143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.829304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.829320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.829461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.829478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.829615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.829630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.829711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.829726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.829886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.829902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 
00:38:20.202 [2024-12-13 10:40:13.829984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.830001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.830142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.830158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.830364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.830380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.830618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.830633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.830716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.830730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.830883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.830899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.830991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.831007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.831084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.831099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.831193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.831209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.831436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.831467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 
00:38:20.202 [2024-12-13 10:40:13.831550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.831565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.831771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.831787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.831932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.202 [2024-12-13 10:40:13.831948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.202 qpair failed and we were unable to recover it. 00:38:20.202 [2024-12-13 10:40:13.832048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.832063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.832157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.832172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.832313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.832328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.832397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.832412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.832554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.832570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.832714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.832730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.832806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.832823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 
00:38:20.203 [2024-12-13 10:40:13.832968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.832984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.833155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.833179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.833321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.833341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.833490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.833506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.833647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.833663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.833746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.833762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.833913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.833929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.834013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.834030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.834114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.834130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.834279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.834295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 
00:38:20.203 [2024-12-13 10:40:13.834430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.834445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.834544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.834560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.834646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.834662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.834758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.834773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.834946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.834961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.835122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.835144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.835229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.835244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.835328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.835343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.835412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.835426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.835507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.835522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 
00:38:20.203 [2024-12-13 10:40:13.835599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.835614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.835684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.835699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.835774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.835791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.835925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.835939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.836099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.836115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.836186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.836201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.836309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.836342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.836524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.836550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.836641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.836664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.836946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.836964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 
00:38:20.203 [2024-12-13 10:40:13.837056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.837071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.837227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.203 [2024-12-13 10:40:13.837241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.203 qpair failed and we were unable to recover it. 00:38:20.203 [2024-12-13 10:40:13.837382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.837399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.837474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.837490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.837641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.837656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.837724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.837739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.837885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.837901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.837982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.837997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.838175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.838190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.838270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.838295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 
00:38:20.204 [2024-12-13 10:40:13.838375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.838399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.838480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.838495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.838720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.838735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.838943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.838958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.839107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.839122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.839213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.839228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.839298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.839313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.839408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.839423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.839586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.839603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.839690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.839705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 
00:38:20.204 [2024-12-13 10:40:13.839850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.839865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.840010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.840025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.840110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.840126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.840212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.840227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.840364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.840379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.840465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.840480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.840684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.840700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.840838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.840853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.840930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.840945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.841096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.841111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 
00:38:20.204 [2024-12-13 10:40:13.841203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.841218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.841376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.841393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.841473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.841489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.841651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.841666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.841766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.841781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.841863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.841877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.841948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.204 [2024-12-13 10:40:13.841967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.204 qpair failed and we were unable to recover it. 00:38:20.204 [2024-12-13 10:40:13.842113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.842128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.842202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.842218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.842376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.842396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 
00:38:20.205 [2024-12-13 10:40:13.842475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.842491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.842577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.842592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.842690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.842705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.842842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.842857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.842932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.842947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.843059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.843074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.843150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.843165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.843322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.843338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.843435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.843456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.843594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.843609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 
00:38:20.205 [2024-12-13 10:40:13.843749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.843764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.843901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.843916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.844054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.844070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.844148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.844164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.844248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.844264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.844355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.844371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.844445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.844466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.844550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.844568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.844842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.844858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.844996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.845012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 
00:38:20.205 [2024-12-13 10:40:13.845093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.845109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.845176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.845192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.845268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.845283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.845357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.845373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.845595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.845614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.845713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.845728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.845865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.845881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.846018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.846034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.846114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.846129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.846331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.846353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 
00:38:20.205 [2024-12-13 10:40:13.846503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.846519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.846664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.846679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.846829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.846846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.846916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.846941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.847031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.847048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.847136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.205 [2024-12-13 10:40:13.847151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.205 qpair failed and we were unable to recover it. 00:38:20.205 [2024-12-13 10:40:13.847304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.847323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.847467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.847483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.847635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.847651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.847799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.847815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 
00:38:20.206 [2024-12-13 10:40:13.847893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.847909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.848047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.848063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.848149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.848164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.848360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.848376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.848530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.848557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.848655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.848673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.848767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.848783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.848874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.848890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.848981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.848996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.849143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.849159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 
00:38:20.206 [2024-12-13 10:40:13.849234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.849250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.849357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.849373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.849605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.849622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.849826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.849842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.849939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.849955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.850097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.850112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.850204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.850220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.850380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.850396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.850549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.850565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.850753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.850768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 
00:38:20.206 [2024-12-13 10:40:13.850853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.850869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.851013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.851029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.851123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.851139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.851280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.851301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.851452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.851468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.851536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.851551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.851719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.851736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.851894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.851910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.851995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.852010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.852093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.852108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 
00:38:20.206 [2024-12-13 10:40:13.852184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.852200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.852283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.852298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.852387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.852403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.852506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.852522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.852610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.206 [2024-12-13 10:40:13.852625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.206 qpair failed and we were unable to recover it. 00:38:20.206 [2024-12-13 10:40:13.852787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.852803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.852878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.852895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.853052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.853069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.853291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.853307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.853468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.853484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 
00:38:20.207 [2024-12-13 10:40:13.853572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.853588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.853656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.853671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.853873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.853889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.854110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.854126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.854279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.854296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.854379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.854394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.854532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.854549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.854720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.854736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.854893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.854908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.855061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.855077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 
00:38:20.207 [2024-12-13 10:40:13.855163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.855179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.855350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.855366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.855532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.855547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.855663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.855679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.855813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.855829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.855984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.856001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.856158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.856173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.856339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.856354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.856510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.856527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.856614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.856629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 
00:38:20.207 [2024-12-13 10:40:13.856741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.856756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.856897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.856913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.857048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.857063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.857201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.857217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.857437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.857459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.857604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.857620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.857753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.857769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.857982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.857998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.858158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.858174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.858272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.858288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 
00:38:20.207 [2024-12-13 10:40:13.858371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.858387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.858551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.858568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.858652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.858681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.858935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.858951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.207 qpair failed and we were unable to recover it. 00:38:20.207 [2024-12-13 10:40:13.859051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.207 [2024-12-13 10:40:13.859067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.859155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.859171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.859315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.859335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.859474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.859490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.859568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.859584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.859821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.859836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 
00:38:20.208 [2024-12-13 10:40:13.860066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.860081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.860310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.860326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.860470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.860486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.860571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.860588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.860662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.860677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.860845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.860863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.860959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.860986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.861060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.861088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.861295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.861312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.861458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.861473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 
00:38:20.208 [2024-12-13 10:40:13.861623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.861640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.861722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.861738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.861961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.861977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.862123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.862139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.862284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.862301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.862368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.862383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.862490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.862507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.862653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.862671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.862844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.862860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.863006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.863022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 
00:38:20.208 [2024-12-13 10:40:13.863111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.863126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.863272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.863288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.863438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.863473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.863638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.863655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.863894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.863911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.864003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.864019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.864246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.864261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.864430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.864447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.864670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.208 [2024-12-13 10:40:13.864686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.208 qpair failed and we were unable to recover it. 00:38:20.208 [2024-12-13 10:40:13.864880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.864897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 
00:38:20.209 [2024-12-13 10:40:13.864996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.865011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.865146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.865162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.865311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.865327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.865421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.865435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.865671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.865686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.865787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.865803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.865952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.865969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.866128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.866143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.866224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.866240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.866332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.866348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 
00:38:20.209 [2024-12-13 10:40:13.866553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.866570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.866656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.866671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.866821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.866837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.866913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.866931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.867087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.867104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.867204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.867219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.867366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.867382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.867472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.867487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.867644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.867661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.867810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.867826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 
00:38:20.209 [2024-12-13 10:40:13.867906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.867922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.868011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.868026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.868164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.868179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.868252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.868267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.868454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.868471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.868620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.868636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.868841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.868857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.868930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.868947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.869033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.869049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.869185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.869201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 
00:38:20.209 [2024-12-13 10:40:13.869282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.869298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.869454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.869471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.869541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.869557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.869654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.869669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.869839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.869854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.870004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.870021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.870102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.870117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.870340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.870356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.209 qpair failed and we were unable to recover it. 00:38:20.209 [2024-12-13 10:40:13.870507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.209 [2024-12-13 10:40:13.870523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.210 qpair failed and we were unable to recover it. 00:38:20.210 [2024-12-13 10:40:13.870764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.210 [2024-12-13 10:40:13.870786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.210 qpair failed and we were unable to recover it. 
00:38:20.210 [2024-12-13 10:40:13.870995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:20.210 [2024-12-13 10:40:13.871012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:20.210 qpair failed and we were unable to recover it.
00:38:20.215 [2024-12-13 10:40:13.871159 .. 10:40:13.902087] [the same three-line sequence -- posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock error for tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." -- repeats for every subsequent reconnect attempt in this interval]
00:38:20.215 [2024-12-13 10:40:13.902222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.215 [2024-12-13 10:40:13.902238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.215 qpair failed and we were unable to recover it. 00:38:20.215 [2024-12-13 10:40:13.902433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.215 [2024-12-13 10:40:13.902454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.215 qpair failed and we were unable to recover it. 00:38:20.215 [2024-12-13 10:40:13.902558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.215 [2024-12-13 10:40:13.902573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.215 qpair failed and we were unable to recover it. 00:38:20.215 [2024-12-13 10:40:13.902731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.215 [2024-12-13 10:40:13.902748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.215 qpair failed and we were unable to recover it. 00:38:20.215 [2024-12-13 10:40:13.902835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.215 [2024-12-13 10:40:13.902850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.215 qpair failed and we were unable to recover it. 00:38:20.215 [2024-12-13 10:40:13.902932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.215 [2024-12-13 10:40:13.902947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.215 qpair failed and we were unable to recover it. 00:38:20.215 [2024-12-13 10:40:13.903083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.215 [2024-12-13 10:40:13.903097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.215 qpair failed and we were unable to recover it. 00:38:20.215 [2024-12-13 10:40:13.903234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.215 [2024-12-13 10:40:13.903250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.215 qpair failed and we were unable to recover it. 00:38:20.215 [2024-12-13 10:40:13.903398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.215 [2024-12-13 10:40:13.903414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.215 qpair failed and we were unable to recover it. 00:38:20.215 [2024-12-13 10:40:13.903631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.215 [2024-12-13 10:40:13.903647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.215 qpair failed and we were unable to recover it. 
00:38:20.215 [2024-12-13 10:40:13.903792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.215 [2024-12-13 10:40:13.903808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.215 qpair failed and we were unable to recover it. 00:38:20.215 [2024-12-13 10:40:13.903971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.215 [2024-12-13 10:40:13.903987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.215 qpair failed and we were unable to recover it. 00:38:20.215 [2024-12-13 10:40:13.904132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.215 [2024-12-13 10:40:13.904147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.215 qpair failed and we were unable to recover it. 00:38:20.215 [2024-12-13 10:40:13.904230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.215 [2024-12-13 10:40:13.904246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.215 qpair failed and we were unable to recover it. 00:38:20.215 [2024-12-13 10:40:13.904314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.215 [2024-12-13 10:40:13.904329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.215 qpair failed and we were unable to recover it. 00:38:20.215 [2024-12-13 10:40:13.904391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.215 [2024-12-13 10:40:13.904407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.215 qpair failed and we were unable to recover it. 00:38:20.215 [2024-12-13 10:40:13.904560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.215 [2024-12-13 10:40:13.904576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.215 qpair failed and we were unable to recover it. 00:38:20.215 [2024-12-13 10:40:13.904735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.215 [2024-12-13 10:40:13.904752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.215 qpair failed and we were unable to recover it. 00:38:20.215 [2024-12-13 10:40:13.904832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.215 [2024-12-13 10:40:13.904851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.215 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.904944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.904959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 
00:38:20.216 [2024-12-13 10:40:13.905104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.905120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.905265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.905284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.905434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.905453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.905545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.905562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.905653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.905669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.905769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.905785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.905941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.905959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.906038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.906054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.906193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.906209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.906350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.906367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 
00:38:20.216 [2024-12-13 10:40:13.906454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.906471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.906563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.906578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.906653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.906669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.906743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.906759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.906910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.906926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.907100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.907116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.907266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.907284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.907441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.907480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.907555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.907571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.907721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.907738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 
00:38:20.216 [2024-12-13 10:40:13.907967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.907982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.908139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.908154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.908240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.908256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.908352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.908368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.908523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.908539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.908687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.908702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.908900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.908916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.909052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.909072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.909147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.909162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.909303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.909319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 
00:38:20.216 [2024-12-13 10:40:13.909465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.909482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.909636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.909651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.909756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.909772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.909927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.909942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.910092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.910108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.910348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.910363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.910461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.910476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.910576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.216 [2024-12-13 10:40:13.910593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.216 qpair failed and we were unable to recover it. 00:38:20.216 [2024-12-13 10:40:13.910731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.910747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.910986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.911002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 
00:38:20.217 [2024-12-13 10:40:13.911079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.911095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.911249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.911264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.911413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.911437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.911520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.911536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.911669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.911685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.911838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.911853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.911986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.912001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.912093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.912118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.912328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.912343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.912432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.912455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 
00:38:20.217 [2024-12-13 10:40:13.912526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.912541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.912696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.912712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.912875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.912891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.913033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.913058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.913154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.913170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.913246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.913262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.913470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.913489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.913638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.913654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.913852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.913868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.914035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.914052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 
00:38:20.217 [2024-12-13 10:40:13.914134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.914150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.914316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.914332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.914410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.914426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.914660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.914675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.914822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.914838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.915001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.915016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.915150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.915165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.915367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.915382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.915488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.915505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.915765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.915780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 
00:38:20.217 [2024-12-13 10:40:13.915951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.915967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.916067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.916083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.217 [2024-12-13 10:40:13.916229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.217 [2024-12-13 10:40:13.916244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.217 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.916380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.916396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.916622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.916639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.916733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.916748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.916831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.916846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.916926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.916941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.917191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.917206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.917350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.917364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 
00:38:20.218 [2024-12-13 10:40:13.917459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.917475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.917572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.917588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.917661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.917676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.917813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.917829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.917985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.918000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.918098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.918113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.918205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.918220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.918380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.918396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.918546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.918562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.918712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.918727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 
00:38:20.218 [2024-12-13 10:40:13.918906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.918922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.919016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.919036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.919181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.919196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.919347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.919363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.919456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.919473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.919557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.919573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.919660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.919676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.919832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.919847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.919988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.920003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 00:38:20.218 [2024-12-13 10:40:13.920091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.218 [2024-12-13 10:40:13.920107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.218 qpair failed and we were unable to recover it. 
00:38:20.218 [... repeated connect() failed, errno = 111 / qpair failed messages continue through 10:40:13.921 ...]
00:38:20.218 [2024-12-13 10:40:13.921193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:38:20.218 [... repeated connect() failed, errno = 111 / qpair failed messages resume ...]
00:38:20.219 [... the same connect() failed, errno = 111 / sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. sequence continues to repeat through 10:40:13.926 ...]
00:38:20.219 [2024-12-13 10:40:13.926791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.219 [2024-12-13 10:40:13.926807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.219 qpair failed and we were unable to recover it. 00:38:20.219 [2024-12-13 10:40:13.926902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.219 [2024-12-13 10:40:13.926917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.219 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.927008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.927024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.927176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.927192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.927329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.927345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.927455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.927472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.927560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.927580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.927651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.927666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.927837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.927852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.928005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.928022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 
00:38:20.220 [2024-12-13 10:40:13.928102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.928117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.928206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.928222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.928315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.928332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.928413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.928428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.928525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.928542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.928756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.928771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.928926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.928940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.929032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.929047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.929300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.929317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.929480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.929497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 
00:38:20.220 [2024-12-13 10:40:13.929686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.929702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.929930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.929945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.930188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.930203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.930434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.930454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.930678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.930695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.930945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.930962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.931049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.931065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.931149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.931169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.931421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.931439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.931657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.931674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 
00:38:20.220 [2024-12-13 10:40:13.931764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.931779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.931998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.932014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.932242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.932258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.932328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.932344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.932427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.932443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.932606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.932622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.932803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.932820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.933067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.933082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.933232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.933248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.933477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.933493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 
00:38:20.220 [2024-12-13 10:40:13.933724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.220 [2024-12-13 10:40:13.933742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.220 qpair failed and we were unable to recover it. 00:38:20.220 [2024-12-13 10:40:13.933906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.933922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.934013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.934029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.934168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.934184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.934405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.934421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.934520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.934536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.934764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.934780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.934861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.934878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.935029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.935045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.935189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.935205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 
00:38:20.221 [2024-12-13 10:40:13.935361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.935378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.935597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.935613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.935783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.935799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.936036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.936052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.936222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.936238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.936393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.936410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.936492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.936508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.936580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.936595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.936799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.936816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.936953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.936968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 
00:38:20.221 [2024-12-13 10:40:13.937126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.937142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.937249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.937267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.937496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.937515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.937673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.937691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.937841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.937858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.938112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.938130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.938294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.938311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.938466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.938486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.938577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.938594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.938771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.938791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 
00:38:20.221 [2024-12-13 10:40:13.938936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.938957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.939182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.939201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.939433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.939466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.939702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.939718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.939944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.939961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.940105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.221 [2024-12-13 10:40:13.940122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.221 qpair failed and we were unable to recover it. 00:38:20.221 [2024-12-13 10:40:13.940279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.940295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.940511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.940527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.940624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.940640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.940891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.940907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 
00:38:20.222 [2024-12-13 10:40:13.941112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.941127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.941265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.941280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.941459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.941475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.941637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.941652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.941878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.941894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.942099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.942115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.942340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.942355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.942501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.942517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.942758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.942774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.943029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.943044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 
00:38:20.222 [2024-12-13 10:40:13.943275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.943291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.943385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.943400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.943540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.943556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.943661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.943677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.943898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.943914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.944059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.944075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.944216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.944231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.944329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.944344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.944504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.944521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.944719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.944734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 
00:38:20.222 [2024-12-13 10:40:13.944957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.944973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.945134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.945149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.945304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.945320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.945581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.945597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.945758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.945773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.946014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.946029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.946277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.946292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.946379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.946397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.946573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.946589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.946815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.946831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 
00:38:20.222 [2024-12-13 10:40:13.946979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.946995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.947162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.947177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.947413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.947428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.947605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.947621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.222 [2024-12-13 10:40:13.947768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.222 [2024-12-13 10:40:13.947784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.222 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.947928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.947944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.948118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.948133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.948215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.948232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.948402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.948416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.948498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.948514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 
00:38:20.223 [2024-12-13 10:40:13.948745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.948761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.948966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.948982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.949248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.949263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.949465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.949481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.949730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.949745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.949904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.949919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.950124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.950139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.950363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.950378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.950605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.950621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.950855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.950871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 
00:38:20.223 [2024-12-13 10:40:13.950952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.950967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.951115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.951131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.951296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.951310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.951540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.951555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.951710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.951738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.951951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.951967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.952121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.952136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.952290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.952305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.952445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.952467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.952608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.952624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 
00:38:20.223 [2024-12-13 10:40:13.952704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.952720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.952868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.952884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.953066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.953081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.953237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.953252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.953483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.953499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.953663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.953679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.953900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.953916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.954080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.954098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.954248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.954263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.954419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.954435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 
00:38:20.223 [2024-12-13 10:40:13.954647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.954663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.954821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.954836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.954979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.954995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.955206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.955222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.955456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.955473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.955554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.223 [2024-12-13 10:40:13.955570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.223 qpair failed and we were unable to recover it. 00:38:20.223 [2024-12-13 10:40:13.955714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.955729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.955957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.955973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.956210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.956226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.956481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.956510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 
00:38:20.224 [2024-12-13 10:40:13.956604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.956622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.956779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.956795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.957009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.957024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.957232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.957248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.957475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.957491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.957717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.957733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.957958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.957974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.958187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.958203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.958376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.958392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.958617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.958633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 
00:38:20.224 [2024-12-13 10:40:13.958856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.958871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.959031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.959046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.959270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.959285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.959379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.959395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.959700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.959742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.960036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.960070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.960334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.960369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.960630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.960648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.960892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.960908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.961051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.961067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 
00:38:20.224 [2024-12-13 10:40:13.961309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.961325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.961423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.961439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.961655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.961671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.961806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.961821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.962049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.962064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.962201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.962216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.962404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.962419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.962570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.962588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.962746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.962761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.962926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.962942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 
00:38:20.224 [2024-12-13 10:40:13.963097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.963113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.963293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.963310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.963388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.963403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.963566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.963582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.963746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.963762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.963919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.963935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.964090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.964107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.964259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.964276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.224 [2024-12-13 10:40:13.964454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.224 [2024-12-13 10:40:13.964471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.224 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.964652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.964667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 
00:38:20.225 [2024-12-13 10:40:13.964826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.964842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.965046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.965071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.965229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.965245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.965472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.965490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.965711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.965727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.965833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.965849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.966104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.966120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.966363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.966380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.966612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.966628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.966794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.966810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 
00:38:20.225 [2024-12-13 10:40:13.967025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.967040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.967239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.967255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.967391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.967406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.967563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.967579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.967841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.967872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.968080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.968106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.968326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.968384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.968634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.968659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.968781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.968805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.968909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.968932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 
00:38:20.225 [2024-12-13 10:40:13.969099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.969123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.969368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.969393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.969549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.969574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.969685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.969702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.969856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.969871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.969960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.969976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.970219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.970234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.970388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.970405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.970638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.970654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.970865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.970880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 
00:38:20.225 [2024-12-13 10:40:13.971145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.971160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.971390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.971405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.971586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.971601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.971801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.971816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.971956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.971971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.972135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.972150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.972285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.972300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.972540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.972556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.972761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.972778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 00:38:20.225 [2024-12-13 10:40:13.972888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.972903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.225 qpair failed and we were unable to recover it. 
00:38:20.225 [2024-12-13 10:40:13.973068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.225 [2024-12-13 10:40:13.973084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.973217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.973232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.973459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.973475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.973631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.973647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.973738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.973753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.973973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.973988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.974215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.974231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.974431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.974454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.974674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.974689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.974838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.974853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 
00:38:20.226 [2024-12-13 10:40:13.975009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.975024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.975225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.975241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.975461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.975477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.975574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.975590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.975797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.975812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.975969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.975983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.976123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.976139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.976283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.976300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.976383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.976398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.976505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.976520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 
00:38:20.226 [2024-12-13 10:40:13.976748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.976763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.976967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.976982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.977136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.977151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.977372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.977388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.977540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.977557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.977710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.226 [2024-12-13 10:40:13.977725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.226 qpair failed and we were unable to recover it. 00:38:20.226 [2024-12-13 10:40:13.977924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.977940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.978163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.978181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.978272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.978287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.978460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.978476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 
00:38:20.227 [2024-12-13 10:40:13.978613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.978629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.978830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.978846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.978923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.978939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.979103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.979119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.979372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.979393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.979618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.979634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.979796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.979810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.980012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.980028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.980178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.980193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.980275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.980290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 
00:38:20.227 [2024-12-13 10:40:13.980365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.980379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.980528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.980544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.980701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.980717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.980859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.980875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.981048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.981064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.981224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.981239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.981491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.981506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.981657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.981673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.981849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.981864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.982045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.982061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 
00:38:20.227 [2024-12-13 10:40:13.982211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.982226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.982455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.982471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.982698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.982714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.982888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.982903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.983046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.983062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.983213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.983229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.983368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.983383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.983622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.983638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.983807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.983822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.983971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.983986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 
00:38:20.227 [2024-12-13 10:40:13.984068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.984084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.984248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.984264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.984432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.984447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.984715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.984731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.227 [2024-12-13 10:40:13.984956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.227 [2024-12-13 10:40:13.984971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.227 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.985072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.985087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.985277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.985292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.985443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.985467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.985699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.985716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.985942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.985958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 
00:38:20.228 [2024-12-13 10:40:13.986057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.986072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.986295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.986310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.986410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.986425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.986663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.986679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.986832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.986848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.987048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.987064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.987166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.987181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.987323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.987339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.987542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.987558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.987644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.987659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 
00:38:20.228 [2024-12-13 10:40:13.987920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.987935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.988165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.988181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.988406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.988422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.988654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.988671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.988899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.988915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.989086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.989102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.989336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.989352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.989513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.989529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.989662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.989678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.989837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.989852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 
00:38:20.228 [2024-12-13 10:40:13.990073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.990088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.990302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.990318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.990520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.990537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.990737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.990752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.990955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.990990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.991188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.991212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.991399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.991423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.991542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.991566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.991783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.991806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.992025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.992048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 
00:38:20.228 [2024-12-13 10:40:13.992205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.992229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.992472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.992497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.228 [2024-12-13 10:40:13.992697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.228 [2024-12-13 10:40:13.992720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.228 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.992933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.992951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.993098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.993113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.993316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.993332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.993440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.993460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.993548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.993570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.993794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.993811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.993962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.993977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 
00:38:20.229 [2024-12-13 10:40:13.994130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.994147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.994309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.994326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.994528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.994544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.994760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.994776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.994914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.994929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.995079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.995094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.995317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.995333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.995503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.995519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.995700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.995716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.995924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.995940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 
00:38:20.229 [2024-12-13 10:40:13.996092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.996107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.996355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.996370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.996621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.996637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.996837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.996852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.996993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.997009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.997235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.997251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.997465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.997481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.997583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.997599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.997683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.997698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.997902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.997917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 
00:38:20.229 [2024-12-13 10:40:13.998075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.998090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.998233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.998249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.998388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.998403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.998571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.998587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.998867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.998894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.999064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.999088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.999305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.999328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.999487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.999511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.229 [2024-12-13 10:40:13.999744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.229 [2024-12-13 10:40:13.999772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:20.229 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:13.999985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.000008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 
00:38:20.230 [2024-12-13 10:40:14.000113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.000130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.000278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.000293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.000453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.000468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.000550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.000566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.000658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.000674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.000886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.000901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.001144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.001160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.001391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.001406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.001612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.001628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.001715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.001730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 
00:38:20.230 [2024-12-13 10:40:14.001822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.001838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.002062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.002077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.002315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.002331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.002589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.002605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.002882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.002897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.003188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.003204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.003292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.003308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.003466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.003482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.003584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.003600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.003771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.003786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 
00:38:20.230 [2024-12-13 10:40:14.003937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.003953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.004175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.004191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.004343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.004358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.004445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.004467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.004621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.004637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.004788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.004804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.005005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.005020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.005109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.005126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.005267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.005282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.230 [2024-12-13 10:40:14.005456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.005471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 
00:38:20.230 [2024-12-13 10:40:14.005642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.230 [2024-12-13 10:40:14.005658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.230 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.005948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.005963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.006212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.006228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.006388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.006403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.006630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.006648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.006798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.006814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.006994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.007011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.007170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.007191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.007407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.007423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.007594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.007611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 
00:38:20.231 [2024-12-13 10:40:14.007769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.007784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.007953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.007969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.008140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.008156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.008385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.008401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.008630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.008646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.008851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.008867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.009137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.009153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.009338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.009353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.009545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.009561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.009721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.009737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 
00:38:20.231 [2024-12-13 10:40:14.009990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.010005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.010174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.010190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.010440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.010463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.010562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.010578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.010710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.010726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.010867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.010883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.011115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.011131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.011337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.011353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.011596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.011612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.011814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.011830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 
00:38:20.231 [2024-12-13 10:40:14.011912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.011928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.012066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.012083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.231 [2024-12-13 10:40:14.012223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.231 [2024-12-13 10:40:14.012240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.231 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.012465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.012481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.012579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.012594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.012740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.012756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.012955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.012970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.013117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.013132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.013323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.013338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.013560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.013576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 
00:38:20.232 [2024-12-13 10:40:14.013808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.013823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.013977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.013993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.014182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.014198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.014434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.014454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.014638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.014657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.014861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.014877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.015044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.015059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.015230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.015246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.015473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.015489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.015656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.015672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 
00:38:20.232 [2024-12-13 10:40:14.015812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.015828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.015965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.015981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.016136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.016152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.016290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.016306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.016490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.016506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.016733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.016749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.016928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.016943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.017167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.017183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.017416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.017432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.017668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.017684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 
00:38:20.232 [2024-12-13 10:40:14.017845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.017861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.017940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.017955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.018182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.018198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.018342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.018358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.018591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.018607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.018812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.018827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.019072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.019087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.019243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.019259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.232 [2024-12-13 10:40:14.019344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.232 [2024-12-13 10:40:14.019359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.232 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.019512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.019527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 
00:38:20.233 [2024-12-13 10:40:14.019690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.019706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.019912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.019932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.020022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.020037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.020190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.020206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.020376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.020391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.020531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.020547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.020768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.020784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.020948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.020964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.021134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.021149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.021350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.021366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 
00:38:20.233 [2024-12-13 10:40:14.021524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.021540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.021628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.021644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.021869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.021885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.022032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.022048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.022193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.022213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.022367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.022382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.022541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.022557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.022804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.022820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.023107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.023123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.023321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.023336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 
00:38:20.233 [2024-12-13 10:40:14.023485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.023501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.023657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.023673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.023810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.023826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.024074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.024090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.024177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.024193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.024397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.024412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.024560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.024575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.024805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.024820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.025051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.025066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.025203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.025219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 
00:38:20.233 [2024-12-13 10:40:14.025358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.025374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.025528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.025543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.025710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.025726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.025881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.025896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.026053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.026069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.026178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.026196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.026290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.026305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.233 qpair failed and we were unable to recover it. 00:38:20.233 [2024-12-13 10:40:14.026531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.233 [2024-12-13 10:40:14.026546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.026766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.026782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.027003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.027019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 
00:38:20.234 [2024-12-13 10:40:14.027181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.027197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.027390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.027406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.027630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.027647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.027804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.027820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.028048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.028064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.028253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.028269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.028470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.028486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.028664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.028680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.028908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.028924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.029071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.029087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 
00:38:20.234 [2024-12-13 10:40:14.029290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.029305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.029480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.029496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.029662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.029678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.029835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.029851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.030102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.030120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.030348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.030364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.030509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.030526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.030731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.030747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.030843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.030859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.031112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.031127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 
00:38:20.234 [2024-12-13 10:40:14.031276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.031291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.031473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.031490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.031721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.031736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.031967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.031984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.032143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.032159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.032397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.032418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.032595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.032611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.032835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.032851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.033026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.033042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.033212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.033213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:38:20.234 [2024-12-13 10:40:14.033228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.033245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:20.234 [2024-12-13 10:40:14.033257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:20.234 [2024-12-13 10:40:14.033270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:20.234 [2024-12-13 10:40:14.033279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:20.234 [2024-12-13 10:40:14.033435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.033459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.033640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.033656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.033866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.033882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.034107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.034122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.234 [2024-12-13 10:40:14.034264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.234 [2024-12-13 10:40:14.034280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.234 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.034360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.034376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.034469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.034484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 
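The app_setup_trace NOTICE entries above indicate how these repeated connect() failures can be inspected after the run. A minimal sketch, assuming only what the log itself prints (the 'nvmf' app name, shm instance id 0, and the /dev/shm/nvmf_trace.0 file); no spdk_trace flags beyond those shown in the NOTICE lines are assumed, and the copy destination is illustrative only:

  # errno 111 is ECONNREFUSED on Linux: the listener at 10.0.0.2:4420 is not accepting TCP connections yet.
  # Snapshot the running nvmf app's tracepoints, exactly as suggested by app_setup_trace:
  spdk_trace -s nvmf -i 0
  # Or keep the shared-memory trace file for offline analysis/debug (destination path is an arbitrary example):
  cp /dev/shm/nvmf_trace.0 .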
00:38:20.235 [2024-12-13 10:40:14.034555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.034571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.034772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.034787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.034943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.034959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.035132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.035148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.035383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.035398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.035578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.035594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.035693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.035709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.035760] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:38:20.235 [2024-12-13 10:40:14.035860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.035876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.035859] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:38:20.235 [2024-12-13 10:40:14.035905] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:38:20.235 [2024-12-13 10:40:14.035885] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:38:20.235 [2024-12-13 10:40:14.036065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.036083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 
00:38:20.235 [2024-12-13 10:40:14.036311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.036327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.036487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.036503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.036748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.036765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.036936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.036952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.037129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.037145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.037319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.037336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.037577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.037594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.037836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.037853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.038084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.038100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.038305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.038322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 
00:38:20.235 [2024-12-13 10:40:14.038593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.038610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.038843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.038859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.039020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.039036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.039258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.039275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.039411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.039427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.039594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.039610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.039779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.039796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.040028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.040044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.040280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.040299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 00:38:20.235 [2024-12-13 10:40:14.040383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.235 [2024-12-13 10:40:14.040400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.235 qpair failed and we were unable to recover it. 
00:38:20.235 [2024-12-13 10:40:14.040626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.040644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.040873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.040891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.041140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.041157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.041359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.041375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.041523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.041540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.041642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.041658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.041861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.041878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.042085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.042102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.042327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.042345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.042510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.042527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 
00:38:20.236 [2024-12-13 10:40:14.042733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.042749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.042974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.042990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.043151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.043168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.043391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.043408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.043641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.043658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.043803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.043820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.043971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.043987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.044162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.044178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.044384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.044401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.044633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.044651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 
00:38:20.236 [2024-12-13 10:40:14.044830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.044847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.044998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.045014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.045170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.045186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.045349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.045372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.045616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.045634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.045838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.045870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.046152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.046185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.046402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.046436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.046607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.046627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.046786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.046802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 
00:38:20.236 [2024-12-13 10:40:14.046952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.046969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.236 [2024-12-13 10:40:14.047199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.236 [2024-12-13 10:40:14.047216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.236 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.047315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.047332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.047473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.047492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.047713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.047730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.047913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.047930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.048161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.048178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.048406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.048425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.048614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.048634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.048863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.048881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 
00:38:20.512 [2024-12-13 10:40:14.049032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.049047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.049282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.049298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.049546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.049563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.049767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.049782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.049881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.049897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.050038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.050053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.050261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.050277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.050455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.050471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.050616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.050632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.050879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.050895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 
00:38:20.512 [2024-12-13 10:40:14.051059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.051074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.051216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.051232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.051402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.051417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.051503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.051520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.051748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.051764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.051903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.051919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.052057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.052073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.052279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.052295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.052445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.512 [2024-12-13 10:40:14.052467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.512 qpair failed and we were unable to recover it. 00:38:20.512 [2024-12-13 10:40:14.052563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.052583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 
00:38:20.513 [2024-12-13 10:40:14.052728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.052743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.052829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.052845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.053071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.053086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.053171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.053187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.053370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.053386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.053640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.053668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.053895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.053918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.054086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.054110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.054344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.054367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.054556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.054580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 
00:38:20.513 [2024-12-13 10:40:14.054760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.054783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.054999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.055018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.055258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.055275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.055420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.055436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.055523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.055540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.055696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.055713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.055817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.055833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.056063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.056078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.056256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.056276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.056524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.056541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 
00:38:20.513 [2024-12-13 10:40:14.056632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.056647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.056828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.056844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.057089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.057106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.057205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.057222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.057427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.057443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.057635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.057650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.057806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.057822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.057985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.058001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.058157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.058172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.058421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.058436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 
00:38:20.513 [2024-12-13 10:40:14.058582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.058600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.058826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.058841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.058981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.058997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.059222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.059238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.059457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.059473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.059738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.059755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.059911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.059931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.060078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.513 [2024-12-13 10:40:14.060094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.513 qpair failed and we were unable to recover it. 00:38:20.513 [2024-12-13 10:40:14.060312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.060328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.060508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.060525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 
00:38:20.514 [2024-12-13 10:40:14.060810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.060826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.060999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.061015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.061153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.061168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.061372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.061389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.061478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.061494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.061699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.061716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.061853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.061869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.062121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.062137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.062231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.062247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.062460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.062476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 
00:38:20.514 [2024-12-13 10:40:14.062693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.062710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.062863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.062879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.062963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.062979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.063186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.063201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.063422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.063437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.063607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.063623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.063796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.063812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.063970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.063986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.064146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.064162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.064413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.064429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 
00:38:20.514 [2024-12-13 10:40:14.064589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.064605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.064851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.064867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.065091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.065106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.065260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.065277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.065502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.065520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.065662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.065678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.065847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.065863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.066073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.066089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.066291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.066308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.066529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.066546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 
00:38:20.514 [2024-12-13 10:40:14.066699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.066715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.066856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.066872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.066975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.066991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.067144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.067159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.067362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.067379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.067473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.067489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.067707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.514 [2024-12-13 10:40:14.067723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.514 qpair failed and we were unable to recover it. 00:38:20.514 [2024-12-13 10:40:14.067805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.067821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.067958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.067974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.068131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.068148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 
00:38:20.515 [2024-12-13 10:40:14.068289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.068304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.068466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.068482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.068758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.068773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.069032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.069048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.069207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.069223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.069425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.069443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.069677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.069693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.069833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.069848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.069921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.069936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.070095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.070110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 
00:38:20.515 [2024-12-13 10:40:14.070336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.070352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.070488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.070504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.070723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.070740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.070920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.070936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.071115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.071130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.071346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.071362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.071624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.071640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.071795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.071811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.071901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.071917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.072107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.072125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 
00:38:20.515 [2024-12-13 10:40:14.072277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.072297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.072473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.072490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.072692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.072708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.072898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.072915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.073152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.073168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.073318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.073333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.073511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.073527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.073751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.073767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.073922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.073937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.074141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.074157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 
00:38:20.515 [2024-12-13 10:40:14.074293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.074309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.074542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.074558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.074661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.074677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.074770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.074785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.074955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.074971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.515 [2024-12-13 10:40:14.075215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.515 [2024-12-13 10:40:14.075231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.515 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.075402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.075418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.075565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.075581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.075671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.075687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.075915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.075931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 
00:38:20.516 [2024-12-13 10:40:14.076078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.076094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.076252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.076268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.076473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.076489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.076692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.076708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.076948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.076963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.077145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.077163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.077394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.077410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.077641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.077658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.077890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.077906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.078057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.078072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 
00:38:20.516 [2024-12-13 10:40:14.078313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.078330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.078480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.078496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.078651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.078666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.078887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.078907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.079075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.079091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.079276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.079292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.079495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.079511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.079613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.079629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.079780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.079796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.080003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.080019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 
00:38:20.516 [2024-12-13 10:40:14.080253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.080269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.080492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.080509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.080582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.080599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.080830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.080846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.081073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.081089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.081292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.081307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.081496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.081512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.081751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.081767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.081919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.516 [2024-12-13 10:40:14.081935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.516 qpair failed and we were unable to recover it. 00:38:20.516 [2024-12-13 10:40:14.082162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.517 [2024-12-13 10:40:14.082178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.517 qpair failed and we were unable to recover it. 
00:38:20.517 [2024-12-13 10:40:14.082380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.517 [2024-12-13 10:40:14.082395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.517 qpair failed and we were unable to recover it. 00:38:20.517 [2024-12-13 10:40:14.082547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.517 [2024-12-13 10:40:14.082563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.517 qpair failed and we were unable to recover it. 00:38:20.517 [2024-12-13 10:40:14.082791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.517 [2024-12-13 10:40:14.082806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.517 qpair failed and we were unable to recover it. 00:38:20.517 [2024-12-13 10:40:14.082959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.517 [2024-12-13 10:40:14.082974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.517 qpair failed and we were unable to recover it. 00:38:20.517 [2024-12-13 10:40:14.083054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.517 [2024-12-13 10:40:14.083069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.517 qpair failed and we were unable to recover it. 00:38:20.517 [2024-12-13 10:40:14.083232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.517 [2024-12-13 10:40:14.083247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.517 qpair failed and we were unable to recover it. 00:38:20.517 [2024-12-13 10:40:14.083472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.517 [2024-12-13 10:40:14.083488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.517 qpair failed and we were unable to recover it. 00:38:20.517 [2024-12-13 10:40:14.083695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.517 [2024-12-13 10:40:14.083711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.517 qpair failed and we were unable to recover it. 00:38:20.517 [2024-12-13 10:40:14.083913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.517 [2024-12-13 10:40:14.083929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.517 qpair failed and we were unable to recover it. 00:38:20.517 [2024-12-13 10:40:14.084019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.517 [2024-12-13 10:40:14.084034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.517 qpair failed and we were unable to recover it. 
00:38:20.517 [2024-12-13 10:40:14.084259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:20.517 [2024-12-13 10:40:14.084275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:20.517 qpair failed and we were unable to recover it.
00:38:20.517 [2024-12-13 10:40:14.084531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:20.517 [2024-12-13 10:40:14.084547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:20.517 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats with successive timestamps through 2024-12-13 10:40:14.126147 ...]
00:38:20.522 [2024-12-13 10:40:14.126309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.522 [2024-12-13 10:40:14.126323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.522 qpair failed and we were unable to recover it. 00:38:20.522 [2024-12-13 10:40:14.126472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.522 [2024-12-13 10:40:14.126487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.522 qpair failed and we were unable to recover it. 00:38:20.522 [2024-12-13 10:40:14.126580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.522 [2024-12-13 10:40:14.126594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.522 qpair failed and we were unable to recover it. 00:38:20.522 [2024-12-13 10:40:14.126729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.522 [2024-12-13 10:40:14.126742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.522 qpair failed and we were unable to recover it. 00:38:20.522 [2024-12-13 10:40:14.126994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.522 [2024-12-13 10:40:14.127009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.522 qpair failed and we were unable to recover it. 00:38:20.522 [2024-12-13 10:40:14.127163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.522 [2024-12-13 10:40:14.127177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.522 qpair failed and we were unable to recover it. 00:38:20.522 [2024-12-13 10:40:14.127406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.522 [2024-12-13 10:40:14.127421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.522 qpair failed and we were unable to recover it. 00:38:20.522 [2024-12-13 10:40:14.127580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.522 [2024-12-13 10:40:14.127594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.522 qpair failed and we were unable to recover it. 00:38:20.522 [2024-12-13 10:40:14.127833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.522 [2024-12-13 10:40:14.127848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.522 qpair failed and we were unable to recover it. 00:38:20.522 [2024-12-13 10:40:14.128024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.522 [2024-12-13 10:40:14.128038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.522 qpair failed and we were unable to recover it. 
00:38:20.523 [2024-12-13 10:40:14.128244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.128259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.128470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.128486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.128713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.128728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.128905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.128919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.129148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.129162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.129394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.129408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.129640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.129655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.129822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.129836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.130007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.130026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.130172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.130186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 
00:38:20.523 [2024-12-13 10:40:14.130291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.130304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.130517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.130532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.130693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.130707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.130849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.130863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.131069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.131083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.131282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.131296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.131522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.131537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.131694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.131710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.131913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.131927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.132151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.132165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 
00:38:20.523 [2024-12-13 10:40:14.132323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.132337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.132435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.132453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.132676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.132690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.132798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.132811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.133048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.133061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.133228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.133244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.133494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.133508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.133778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.133792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.133881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.133895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.134038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.134052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 
00:38:20.523 [2024-12-13 10:40:14.134228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.134243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.134325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.134338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.134566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.134580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.134805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.134818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.135069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.135083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.135243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.135257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.135462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.135477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.135702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.135716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.523 qpair failed and we were unable to recover it. 00:38:20.523 [2024-12-13 10:40:14.135944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.523 [2024-12-13 10:40:14.135957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.136134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.136150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 
00:38:20.524 [2024-12-13 10:40:14.136282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.136300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.136507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.136521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.136744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.136759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.136905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.136919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.137084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.137097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.137309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.137325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.137484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.137498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.137707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.137721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.137971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.137985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.138158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.138172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 
00:38:20.524 [2024-12-13 10:40:14.138302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.138316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.138456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.138470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.138702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.138716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.138852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.138866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.139090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.139103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.139324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.139337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.139560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.139574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.139837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.139851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.139997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.140010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.140237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.140250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 
00:38:20.524 [2024-12-13 10:40:14.140413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.140427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.140634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.140649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.140817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.140830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.141005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.141019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.141172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.141186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.141402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.141418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.141576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.141590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.141822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.141836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.141985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.141998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.142204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.142218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 
00:38:20.524 [2024-12-13 10:40:14.142456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.142470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.142715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.142729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.142981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.142994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.143164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.143178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.143379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.143393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.143565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.143579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.143802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.143816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.524 [2024-12-13 10:40:14.144049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.524 [2024-12-13 10:40:14.144063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.524 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.144222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.144235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.144463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.144478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 
00:38:20.525 [2024-12-13 10:40:14.144645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.144658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.144865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.144879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.145111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.145125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.145353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.145367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.145528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.145543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.145701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.145714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.145863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.145877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.146107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.146121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.146268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.146281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.146499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.146513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 
00:38:20.525 [2024-12-13 10:40:14.146731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.146745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.146944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.146958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.147159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.147201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.147434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.147482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.147751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.147799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.147970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.147986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.148070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.148084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.148261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.148275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.148424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.148437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.148680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.148694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 
00:38:20.525 [2024-12-13 10:40:14.148914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.148927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.149079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.149092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.149239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.149253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.149340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.149354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.149539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.149553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.149742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.149758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.149974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.149989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.150140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.150163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.150247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.150260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.150463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.150477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 
00:38:20.525 [2024-12-13 10:40:14.150643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.150657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.150891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.525 [2024-12-13 10:40:14.150905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.525 qpair failed and we were unable to recover it. 00:38:20.525 [2024-12-13 10:40:14.151107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.151121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.151266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.151280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.151483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.151498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.151726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.151741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.152016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.152031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.152131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.152145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.152352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.152366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.152615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.152630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 
00:38:20.526 [2024-12-13 10:40:14.152774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.152788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.152935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.152948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.153203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.153217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.153465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.153479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.153689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.153703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.153873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.153887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.154032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.154047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.154253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.154267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.154417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.154431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.154684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.154697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 
00:38:20.526 [2024-12-13 10:40:14.154844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.154858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.154953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.154967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.155148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.155177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.155351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.155377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 A controller has encountered a failure and is being reset. 00:38:20.526 [2024-12-13 10:40:14.155635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.155660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.155948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.155964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.156214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.156228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.156379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.156393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.156571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.156585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.156738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.156752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 
00:38:20.526 [2024-12-13 10:40:14.156901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.156915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.157069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.157083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.157273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.157287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.157490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.157504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.157719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.157733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.157878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.157895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.158148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.158162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.158349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.158362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.158633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.158647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.158852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.158866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 
00:38:20.526 [2024-12-13 10:40:14.159020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.526 [2024-12-13 10:40:14.159033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.526 qpair failed and we were unable to recover it. 00:38:20.526 [2024-12-13 10:40:14.159181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.159194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.159332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.159346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.159524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.159539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.159711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.159725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.159868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.159882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.160031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.160045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.160180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.160194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.160289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.160303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.160510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.160524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 
00:38:20.527 [2024-12-13 10:40:14.160663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.160676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.160876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.160891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.160989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.161003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.161089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.161103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.161195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.161209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.161416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.161430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.161581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.161595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.161846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.161860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.162011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.162024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.162254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.162268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 
00:38:20.527 [2024-12-13 10:40:14.162476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.162490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.162635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.162649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.162820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.162834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.163008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.163023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.163119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.163151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.163293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.163307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.163574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.163588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.163842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.163856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.164054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.164067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.164295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.164310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 
00:38:20.527 [2024-12-13 10:40:14.164398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.164411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.164637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.164651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.164853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.164867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.165041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.165055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.165286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.165300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.165539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.165557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.165758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.165772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.166020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.166033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.166295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.166309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 00:38:20.527 [2024-12-13 10:40:14.166537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.527 [2024-12-13 10:40:14.166551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.527 qpair failed and we were unable to recover it. 
00:38:20.528 [2024-12-13 10:40:14.166726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.166740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.166964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.166978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.167225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.167239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.167394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.167408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.167635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.167650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.167880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.167894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.168063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.168077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.168310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.168324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.168552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.168567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.168771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.168785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 
00:38:20.528 [2024-12-13 10:40:14.168985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.168999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.169228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.169242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.169399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.169413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.169615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.169630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.169781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.169795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.170025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.170039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.170218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.170232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.170476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.170490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.170635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.170649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.170801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.170815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 
00:38:20.528 [2024-12-13 10:40:14.170984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.170997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.171169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.171182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.171387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.171401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.171547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.171561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.171705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.171719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.171873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.171886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.172049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.172063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.172198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.172211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.172434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.172452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.172630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.172644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 
00:38:20.528 [2024-12-13 10:40:14.172742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.172756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.172895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.172908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.173053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.173068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.173313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.173326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.173527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.173542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.173680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.173697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.173933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.173947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.174140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.174154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.174303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.528 [2024-12-13 10:40:14.174317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.528 qpair failed and we were unable to recover it. 00:38:20.528 [2024-12-13 10:40:14.174463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.174478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 
00:38:20.529 [2024-12-13 10:40:14.174689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.174703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.174947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.174961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.175066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.175080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.175235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.175249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.175335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.175349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.175573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.175588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.175679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.175693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.175968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.175982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.176146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.176165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.176306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.176320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 
00:38:20.529 [2024-12-13 10:40:14.176468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.176482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.176682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.176696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.176918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.176932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.177089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.177103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.177333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.177347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.177569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.177583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.177753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.177766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.177989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.178003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.178248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.178261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.178417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.178430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 
00:38:20.529 [2024-12-13 10:40:14.178594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.178608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.178763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.178777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.179001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.179016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.179158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.179172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.179416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.179430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.179573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.179588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.179786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.179800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.180037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.180051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.180198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.180212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.180402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.180416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 
00:38:20.529 [2024-12-13 10:40:14.180582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.180596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.180841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.180855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.181083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.181097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.181299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.181313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.181530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.181544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.181770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.181788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.529 qpair failed and we were unable to recover it. 00:38:20.529 [2024-12-13 10:40:14.181956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.529 [2024-12-13 10:40:14.181970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.182192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.182206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.182366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.182381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.182606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.182621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 
00:38:20.530 [2024-12-13 10:40:14.182814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.182828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.183089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.183103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.183355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.183369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.183457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.183471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.183666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.183680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.183827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.183841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.184099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.184112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.184338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.184352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.184499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.184514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.184724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.184738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 
00:38:20.530 [2024-12-13 10:40:14.184965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.184979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.185131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.185146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.185294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.185307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.185456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.185470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.185675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.185689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.185787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.185800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.186027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.186041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.186127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.186140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.186409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.186422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.186579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.186593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 
00:38:20.530 [2024-12-13 10:40:14.186778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.186793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.186951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.186964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.187210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.187242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.187472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.187497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.187677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.187699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.187871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.187893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.188112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.188134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.188379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.188400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.188531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.188555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.188789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.188814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 
00:38:20.530 [2024-12-13 10:40:14.188994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.189010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.189226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.189240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.530 qpair failed and we were unable to recover it. 00:38:20.530 [2024-12-13 10:40:14.189385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.530 [2024-12-13 10:40:14.189399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.189617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.189631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.189877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.189891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.190042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.190058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.190222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.190236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.190458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.190472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.190726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.190739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.190980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.190994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 
00:38:20.531 [2024-12-13 10:40:14.191140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.191159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.191305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.191319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.191493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.191508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.191662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.191676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.191893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.191907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.192120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.192134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.192358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.192372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.192626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.192640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.192841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.192855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.193066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.193080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 
00:38:20.531 [2024-12-13 10:40:14.193282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.193296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.193453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.193467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.193609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.193623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.193775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.193789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.193994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.194008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.194241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.194255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.194484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.194498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.194636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.194649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.194874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.194888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.195109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.195123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 
00:38:20.531 [2024-12-13 10:40:14.195330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.195344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.195547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.195562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.195793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.195807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.196031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.196044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.196305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.196318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.196415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.196429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.196646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.196660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.196888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.196902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.197109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.197123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.197351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.197365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 
00:38:20.531 [2024-12-13 10:40:14.197470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.531 [2024-12-13 10:40:14.197484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.531 qpair failed and we were unable to recover it. 00:38:20.531 [2024-12-13 10:40:14.197631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.197644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.197875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.197889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.198032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.198045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.198277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.198291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.198440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.198460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.198671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.198684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.198906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.198920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.199121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.199135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.199382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.199396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 
00:38:20.532 [2024-12-13 10:40:14.199582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.199596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.199747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.199761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.200005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.200019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.200153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.200166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.200330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.200344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.200492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.200506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.200708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.200722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.200865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.200879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.201107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.201120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.201380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.201393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 
00:38:20.532 [2024-12-13 10:40:14.201548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.201563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.201827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.201841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.202042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.202056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.202278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.202292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.202427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.202441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.202699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.202713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.202964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.202978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.203183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.203196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.203430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.203444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.203536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.203550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 
00:38:20.532 [2024-12-13 10:40:14.203685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.203699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.203943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.203956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.204122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.204136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.204348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.204362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.204551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.204574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.204656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.204670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.204898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.204912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.205115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.205129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.205406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.205419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 00:38:20.532 [2024-12-13 10:40:14.205618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.532 [2024-12-13 10:40:14.205633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.532 qpair failed and we were unable to recover it. 
00:38:20.532 [2024-12-13 10:40:14.205882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.205895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.206059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.206073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.206234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.206249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.206401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.206414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.206619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.206633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.206800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.206818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.206972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.206985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.207116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.207129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.207351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.207365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.207592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.207606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 
00:38:20.533 [2024-12-13 10:40:14.207699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.207712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.207792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.207806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.208032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.208046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.208196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.208210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.208410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.208424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.208575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.208589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.208813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.208827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.208909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.208922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.209159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.209173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.209374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.209388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 
00:38:20.533 [2024-12-13 10:40:14.209537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.209552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.209778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.209792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.210021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.210035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.210254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.210268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.210532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.210547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.210780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.210795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.210941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.210955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.211182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.211195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.211372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.211386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.211592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.211606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 
00:38:20.533 [2024-12-13 10:40:14.211810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.211824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.212052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.212066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.212205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.212219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.212439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.212456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.212682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.212695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.212934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.212948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.213174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.213188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.213393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.213407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.533 [2024-12-13 10:40:14.213607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.533 [2024-12-13 10:40:14.213621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.533 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.213800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.213814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 
00:38:20.534 [2024-12-13 10:40:14.214064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.214077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.214230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.214244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.214490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.214504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.214730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.214744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.214944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.214958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.215109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.215125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.215356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.215370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.215520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.215534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.215710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.215723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.215922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.215936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 
00:38:20.534 [2024-12-13 10:40:14.216165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.216179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.216334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.216348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.216598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.216612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.216788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.216802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.217004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.217018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.217243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.217257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.217480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.217494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.217713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.217727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.217893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.217911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.218155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.218169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 
00:38:20.534 [2024-12-13 10:40:14.218338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.218351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.218540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.218554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.218728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.218742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.218888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.218902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.219051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.219065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.219270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.219283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.219427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.219441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.219676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.219690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.219837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.219851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.220080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.220093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 
00:38:20.534 [2024-12-13 10:40:14.220311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.220325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.220503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.220517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.534 [2024-12-13 10:40:14.220655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.534 [2024-12-13 10:40:14.220669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.534 qpair failed and we were unable to recover it. 00:38:20.535 [2024-12-13 10:40:14.220869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.535 [2024-12-13 10:40:14.220883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.535 qpair failed and we were unable to recover it. 00:38:20.535 [2024-12-13 10:40:14.221107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.535 [2024-12-13 10:40:14.221120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.535 qpair failed and we were unable to recover it. 00:38:20.535 [2024-12-13 10:40:14.221348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.535 [2024-12-13 10:40:14.221362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.535 qpair failed and we were unable to recover it. 00:38:20.535 [2024-12-13 10:40:14.221581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.535 [2024-12-13 10:40:14.221596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.535 qpair failed and we were unable to recover it. 00:38:20.535 [2024-12-13 10:40:14.221814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.535 [2024-12-13 10:40:14.221828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.535 qpair failed and we were unable to recover it. 00:38:20.535 [2024-12-13 10:40:14.222014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.535 [2024-12-13 10:40:14.222029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.535 qpair failed and we were unable to recover it. 00:38:20.535 [2024-12-13 10:40:14.222257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.535 [2024-12-13 10:40:14.222270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.535 qpair failed and we were unable to recover it. 
00:38:20.535 [2024-12-13 10:40:14.222414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:20.535 [2024-12-13 10:40:14.222428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:20.535 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously between 10:40:14.222 and 10:40:14.264; only the first and last occurrences are shown ...]
00:38:20.540 [2024-12-13 10:40:14.264625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:20.540 [2024-12-13 10:40:14.264640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:20.540 qpair failed and we were unable to recover it.
00:38:20.540 [2024-12-13 10:40:14.264868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.540 [2024-12-13 10:40:14.264884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.540 qpair failed and we were unable to recover it. 00:38:20.540 [2024-12-13 10:40:14.265060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.540 [2024-12-13 10:40:14.265077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.540 qpair failed and we were unable to recover it. 00:38:20.540 [2024-12-13 10:40:14.265230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.540 [2024-12-13 10:40:14.265246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.540 qpair failed and we were unable to recover it. 00:38:20.540 [2024-12-13 10:40:14.265433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.540 [2024-12-13 10:40:14.265455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.540 qpair failed and we were unable to recover it. 00:38:20.540 [2024-12-13 10:40:14.265660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.540 [2024-12-13 10:40:14.265676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.540 qpair failed and we were unable to recover it. 00:38:20.540 [2024-12-13 10:40:14.265900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.540 [2024-12-13 10:40:14.265916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.540 qpair failed and we were unable to recover it. 00:38:20.540 [2024-12-13 10:40:14.266167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.540 [2024-12-13 10:40:14.266183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.540 qpair failed and we were unable to recover it. 00:38:20.540 [2024-12-13 10:40:14.266386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.540 [2024-12-13 10:40:14.266402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.540 qpair failed and we were unable to recover it. 00:38:20.540 [2024-12-13 10:40:14.266550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.540 [2024-12-13 10:40:14.266567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.540 qpair failed and we were unable to recover it. 00:38:20.540 [2024-12-13 10:40:14.266674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.540 [2024-12-13 10:40:14.266689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.540 qpair failed and we were unable to recover it. 
00:38:20.540 [2024-12-13 10:40:14.266913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.540 [2024-12-13 10:40:14.266928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.540 qpair failed and we were unable to recover it. 00:38:20.540 [2024-12-13 10:40:14.267064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.540 [2024-12-13 10:40:14.267080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.540 qpair failed and we were unable to recover it. 00:38:20.540 [2024-12-13 10:40:14.267323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.540 [2024-12-13 10:40:14.267339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.267496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.267513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.267659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.267674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.267848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.267863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.268002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.268018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.268180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.268196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.268425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.268442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.268600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.268615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 
00:38:20.541 [2024-12-13 10:40:14.268753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.268767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.268903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.268918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.269028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.269046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.269312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.269334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.269548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.269568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.269793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.269808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.269947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.269961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.270186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.270199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.270352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.270366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.270594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.270609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 
00:38:20.541 [2024-12-13 10:40:14.270832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.270846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.271098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.271112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.271286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.271300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.271529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.271543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.271690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.271704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.271913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.271927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.272097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.272111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.272329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.272343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.272511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.272525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.272730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.272744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 
00:38:20.541 [2024-12-13 10:40:14.272821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.272834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.273037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.273051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.273254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.273268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.273492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.273506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.273712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.273726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.273949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.273962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.274245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.274259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.274469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.274483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.274710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.274724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.274931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.274945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 
00:38:20.541 [2024-12-13 10:40:14.275092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.275105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.541 [2024-12-13 10:40:14.275279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.541 [2024-12-13 10:40:14.275293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.541 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.275521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.275535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.275699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.275713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.275954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.275968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.276123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.276137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.276311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.276325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.276472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.276486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.276639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.276653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.276822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.276836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 
00:38:20.542 [2024-12-13 10:40:14.277034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.277047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.277263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.277277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.277478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.277495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.277720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.277734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.277868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.277882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.278038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.278051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.278202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.278215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.278473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.278487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.278659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.278673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.278852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.278866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 
00:38:20.542 [2024-12-13 10:40:14.279087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.279102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.279358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.279372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.279531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.279546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.279745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.279759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.279987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.280001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.280225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.280239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.280416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.280430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.280584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.280598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.280797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.280811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.281040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.281054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 
00:38:20.542 [2024-12-13 10:40:14.281201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.281215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.281424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.281438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.281576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.281590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.281735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.281749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.281830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.281844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.282045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.282058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.282204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.282218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.282459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.282475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.282706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.282725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.282964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.282978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 
00:38:20.542 [2024-12-13 10:40:14.283113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.542 [2024-12-13 10:40:14.283126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.542 qpair failed and we were unable to recover it. 00:38:20.542 [2024-12-13 10:40:14.283293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.283307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.283532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.283547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.283724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.283738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.283824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.283837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.284087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.284101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.284343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.284357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.284559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.284574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.284835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.284849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.285050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.285064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 
00:38:20.543 [2024-12-13 10:40:14.285234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.285248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.285389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.285403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.285577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.285595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.285749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.285763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.285900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.285913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.286112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.286126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.286325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.286339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.286561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.286576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.286810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.286825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.287031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.287045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 
00:38:20.543 [2024-12-13 10:40:14.287145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.287159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.287295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.287308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.287556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.287571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.287794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.287808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.288035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.288049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.288308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.288322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.288474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.288488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.288696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.288709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.288910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.288927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.289220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.289234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 
00:38:20.543 [2024-12-13 10:40:14.289367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.289380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.289528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.289542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.289638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.289652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.289898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.289912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.290059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.543 [2024-12-13 10:40:14.290072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.543 qpair failed and we were unable to recover it. 00:38:20.543 [2024-12-13 10:40:14.290161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.290174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.290395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.290408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.290631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.290646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.290900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.290913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.291152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.291166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 
00:38:20.544 [2024-12-13 10:40:14.291406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.291420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.291621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.291636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.291718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.291731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.291958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.291972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.292103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.292117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.292331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.292345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.292443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.292461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.292671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.292684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.292896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.292909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.293131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.293144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 
00:38:20.544 [2024-12-13 10:40:14.293345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.293359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.293523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.293537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.293759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.293775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.293946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.293960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.294203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.294216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.294462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.294476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.294700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.294714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.294846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.294860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.295025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.295038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.295204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.295218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 
00:38:20.544 [2024-12-13 10:40:14.295371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.295384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.295592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.295607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.295833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.295852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.296123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.296137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.296307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.296320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.296477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.296492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.296722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.296736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.296983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.296997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.297269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.297282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.297441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.297458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 
00:38:20.544 [2024-12-13 10:40:14.297685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.297699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.297935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.297949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.298152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.298166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.298255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.544 [2024-12-13 10:40:14.298269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.544 qpair failed and we were unable to recover it. 00:38:20.544 [2024-12-13 10:40:14.298491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.545 [2024-12-13 10:40:14.298505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.545 qpair failed and we were unable to recover it. 00:38:20.545 [2024-12-13 10:40:14.298715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.545 [2024-12-13 10:40:14.298729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.545 qpair failed and we were unable to recover it. 00:38:20.545 [2024-12-13 10:40:14.298876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.545 [2024-12-13 10:40:14.298890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.545 qpair failed and we were unable to recover it. 00:38:20.545 [2024-12-13 10:40:14.299140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.545 [2024-12-13 10:40:14.299154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.545 qpair failed and we were unable to recover it. 00:38:20.545 [2024-12-13 10:40:14.299380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.545 [2024-12-13 10:40:14.299394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:20.545 qpair failed and we were unable to recover it. 00:38:20.545 [2024-12-13 10:40:14.299690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.545 [2024-12-13 10:40:14.299729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:20.545 qpair failed and we were unable to recover it. 
00:38:20.545 [2024-12-13 10:40:14.299929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.545 [2024-12-13 10:40:14.299965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:38:20.545 qpair failed and we were unable to recover it. 00:38:20.545 [2024-12-13 10:40:14.300227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.545 [2024-12-13 10:40:14.300259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:20.545 qpair failed and we were unable to recover it. 00:38:20.545 [2024-12-13 10:40:14.300564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:20.545 [2024-12-13 10:40:14.300596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325d00 with addr=10.0.0.2, port=4420 00:38:20.545 [2024-12-13 10:40:14.300616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325d00 is same with the state(6) to be set 00:38:20.545 [2024-12-13 10:40:14.300646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325d00 (9): Bad file descriptor 00:38:20.545 [2024-12-13 10:40:14.300668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:38:20.545 [2024-12-13 10:40:14.300687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:38:20.545 [2024-12-13 10:40:14.300706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:38:20.545 Unable to reset the controller. 
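The errno = 111 repeated in the posix_sock_create failures above is ECONNREFUSED: the disconnect test has torn down the target's listener on 10.0.0.2:4420, so every reconnect attempt from the host qpairs is refused, and once spdk_nvme_ctrlr_reconnect_poll_async gives up the controller is left in a failed state ("Unable to reset the controller."). As a rough, hedged illustration only — this run uses the SPDK userspace host, not the kernel initiator — an equivalent host-side connect with nvme-cli would look like the sketch below; the transport, address, port and subsystem NQN are taken from the log, while the use of nvme-cli itself is an assumption.

    # Illustrative sketch (assumption: nvme-cli initiator instead of the SPDK host used by this test).
    # This succeeds only while the target's TCP listener on 10.0.0.2:4420 is up; while it is down,
    # the connect is refused with ECONNREFUSED (errno 111), matching the errors logged above.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1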
00:38:20.802 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:20.802 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:38:20.802 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:20.802 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:20.802 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:20.802 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:20.802 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:20.802 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.802 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:21.060 Malloc0 00:38:21.060 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.060 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:21.060 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.060 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:21.060 [2024-12-13 10:40:14.757755] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:21.060 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.060 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:21.060 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.060 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:21.060 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.060 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:21.060 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.060 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:21.060 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.060 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:21.060 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.060 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:21.060 [2024-12-13 10:40:14.786068] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:21.060 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.060 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:21.060 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:21.060 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:21.060 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.060 10:40:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 4160286 00:38:21.630 Controller properly reset. 00:38:26.888 Initializing NVMe Controllers 00:38:26.888 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:26.888 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:26.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:38:26.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:38:26.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:38:26.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:38:26.888 Initialization complete. Launching workers. 
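The rpc_cmd calls traced above bring the target back up for the tc2 case: a 64 MB malloc bdev with 512-byte blocks, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with the Malloc0 namespace, and TCP listeners for the subsystem and for discovery on 10.0.0.2:4420. In the test harness, rpc_cmd forwards these arguments to SPDK's rpc.py against the running nvmf_tgt; a hedged sketch of the same sequence issued by hand is shown below — the arguments are copied from the log, while the ./scripts/rpc.py invocation path is an assumption.

    # Hedged sketch: manual equivalent of the rpc_cmd sequence recorded above,
    # issued against an already-running nvmf_tgt via SPDK's rpc.py.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420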
00:38:26.888 Starting thread on core 1 00:38:26.888 Starting thread on core 2 00:38:26.888 Starting thread on core 3 00:38:26.888 Starting thread on core 0 00:38:26.888 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:38:26.888 00:38:26.888 real 0m11.551s 00:38:26.888 user 0m36.462s 00:38:26.888 sys 0m5.957s 00:38:26.888 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:26.888 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:26.888 ************************************ 00:38:26.888 END TEST nvmf_target_disconnect_tc2 00:38:26.888 ************************************ 00:38:26.888 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:38:26.888 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:38:26.888 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:38:26.889 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:26.889 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:38:26.889 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:26.889 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:38:26.889 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:26.889 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:26.889 rmmod nvme_tcp 00:38:26.889 rmmod nvme_fabrics 00:38:26.889 rmmod nvme_keyring 00:38:26.889 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:26.889 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:38:26.889 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:38:26.889 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 4160954 ']' 00:38:26.889 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 4160954 00:38:26.889 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 4160954 ']' 00:38:26.889 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 4160954 00:38:26.889 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:38:26.889 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:26.889 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4160954 00:38:26.889 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:38:26.889 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:38:26.889 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4160954' 00:38:26.889 killing process with pid 4160954 00:38:26.889 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 4160954 00:38:26.889 10:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 4160954 00:38:27.823 10:40:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:27.823 10:40:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:27.823 10:40:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:27.823 10:40:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:38:27.823 10:40:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:38:27.823 10:40:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:38:27.824 10:40:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:27.824 10:40:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:27.824 10:40:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:27.824 10:40:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:27.824 10:40:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:27.824 10:40:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:30.395 10:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:30.395 00:38:30.395 real 0m21.030s 00:38:30.395 user 1m7.208s 00:38:30.395 sys 0m10.858s 00:38:30.395 10:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:30.395 10:40:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:30.395 ************************************ 00:38:30.395 END TEST nvmf_target_disconnect 00:38:30.395 ************************************ 00:38:30.395 10:40:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:38:30.395 00:38:30.395 real 8m7.478s 00:38:30.395 user 19m19.199s 00:38:30.395 sys 2m7.451s 00:38:30.395 10:40:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:30.395 10:40:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:30.395 ************************************ 00:38:30.395 END TEST nvmf_host 00:38:30.395 ************************************ 00:38:30.395 10:40:23 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:38:30.395 10:40:23 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:38:30.395 10:40:23 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:38:30.395 10:40:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:30.395 10:40:23 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:30.395 10:40:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:30.395 ************************************ 00:38:30.395 START TEST nvmf_target_core_interrupt_mode 00:38:30.395 ************************************ 00:38:30.395 10:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:38:30.395 * Looking for test storage... 00:38:30.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:38:30.395 10:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:30.395 10:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:38:30.395 10:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:30.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.395 --rc genhtml_branch_coverage=1 00:38:30.395 --rc genhtml_function_coverage=1 00:38:30.395 --rc genhtml_legend=1 00:38:30.395 --rc geninfo_all_blocks=1 00:38:30.395 --rc geninfo_unexecuted_blocks=1 00:38:30.395 00:38:30.395 ' 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:30.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.395 --rc genhtml_branch_coverage=1 00:38:30.395 --rc genhtml_function_coverage=1 00:38:30.395 --rc genhtml_legend=1 00:38:30.395 --rc geninfo_all_blocks=1 00:38:30.395 --rc geninfo_unexecuted_blocks=1 00:38:30.395 00:38:30.395 ' 00:38:30.395 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:30.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.395 --rc genhtml_branch_coverage=1 00:38:30.395 --rc genhtml_function_coverage=1 00:38:30.395 --rc genhtml_legend=1 00:38:30.395 --rc geninfo_all_blocks=1 00:38:30.395 --rc geninfo_unexecuted_blocks=1 00:38:30.396 00:38:30.396 ' 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:30.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.396 --rc genhtml_branch_coverage=1 00:38:30.396 --rc genhtml_function_coverage=1 00:38:30.396 --rc genhtml_legend=1 00:38:30.396 --rc geninfo_all_blocks=1 00:38:30.396 --rc geninfo_unexecuted_blocks=1 00:38:30.396 00:38:30.396 ' 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:30.396 ************************************ 00:38:30.396 START TEST nvmf_abort 00:38:30.396 ************************************ 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:38:30.396 * Looking for test storage... 00:38:30.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:30.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.396 --rc genhtml_branch_coverage=1 00:38:30.396 --rc genhtml_function_coverage=1 00:38:30.396 --rc genhtml_legend=1 00:38:30.396 --rc geninfo_all_blocks=1 00:38:30.396 --rc geninfo_unexecuted_blocks=1 00:38:30.396 00:38:30.396 ' 00:38:30.396 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:30.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.396 --rc genhtml_branch_coverage=1 00:38:30.396 --rc genhtml_function_coverage=1 00:38:30.396 --rc genhtml_legend=1 00:38:30.397 --rc geninfo_all_blocks=1 00:38:30.397 --rc geninfo_unexecuted_blocks=1 00:38:30.397 00:38:30.397 ' 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:30.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.397 --rc genhtml_branch_coverage=1 00:38:30.397 --rc genhtml_function_coverage=1 00:38:30.397 --rc genhtml_legend=1 00:38:30.397 --rc geninfo_all_blocks=1 00:38:30.397 --rc geninfo_unexecuted_blocks=1 00:38:30.397 00:38:30.397 ' 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:30.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.397 --rc genhtml_branch_coverage=1 00:38:30.397 --rc genhtml_function_coverage=1 00:38:30.397 --rc genhtml_legend=1 00:38:30.397 --rc geninfo_all_blocks=1 00:38:30.397 --rc geninfo_unexecuted_blocks=1 00:38:30.397 00:38:30.397 ' 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:30.397 10:40:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:38:30.397 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:35.669 10:40:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:35.669 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:35.669 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:35.669 Found net devices under 0000:af:00.0: cvl_0_0 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:35.669 Found net devices under 0000:af:00.1: cvl_0_1 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:35.669 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:35.670 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:35.670 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:35.670 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:35.670 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:35.670 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:35.670 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:35.670 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:35.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:35.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:38:35.929 00:38:35.929 --- 10.0.0.2 ping statistics --- 00:38:35.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:35.929 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:35.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:35.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:38:35.929 00:38:35.929 --- 10.0.0.1 ping statistics --- 00:38:35.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:35.929 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=4165628 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 4165628 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 4165628 ']' 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:35.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:35.929 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:36.189 [2024-12-13 10:40:29.826609] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:36.189 [2024-12-13 10:40:29.828818] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:38:36.189 [2024-12-13 10:40:29.828892] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:36.189 [2024-12-13 10:40:29.948844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:36.189 [2024-12-13 10:40:30.059637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:36.189 [2024-12-13 10:40:30.059685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:36.189 [2024-12-13 10:40:30.059699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:36.189 [2024-12-13 10:40:30.059709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:36.189 [2024-12-13 10:40:30.059719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:36.189 [2024-12-13 10:40:30.062249] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:36.189 [2024-12-13 10:40:30.062268] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:36.189 [2024-12-13 10:40:30.062278] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:38:36.757 [2024-12-13 10:40:30.377490] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:36.757 [2024-12-13 10:40:30.378504] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:36.757 [2024-12-13 10:40:30.379278] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:38:36.757 [2024-12-13 10:40:30.379495] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:36.757 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:36.757 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:38:36.757 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:36.757 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:36.757 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:37.016 [2024-12-13 10:40:30.667151] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:37.016 Malloc0 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:37.016 Delay0 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:37.016 [2024-12-13 10:40:30.799216] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:37.016 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:37.017 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:38:37.275 [2024-12-13 10:40:30.991612] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:39.808 Initializing NVMe Controllers 00:38:39.808 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:39.808 controller IO queue size 128 less than required 00:38:39.808 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:38:39.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:38:39.808 Initialization complete. Launching workers. 
00:38:39.808 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 34340 00:38:39.808 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34401, failed to submit 66 00:38:39.808 success 34340, unsuccessful 61, failed 0 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:39.808 rmmod nvme_tcp 00:38:39.808 rmmod nvme_fabrics 00:38:39.808 rmmod nvme_keyring 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 4165628 ']' 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 4165628 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 4165628 ']' 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 4165628 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4165628 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:39.808 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4165628' 00:38:39.808 killing process with pid 4165628 
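Annotation (not part of the captured log): the run above is the core of target/abort.sh. The example app queues more I/O than the controller's 128-entry IO queue can hold, then aborts whatever is still pending (here 34401 aborts submitted, 34340 successful). A condensed sketch of that invocation follows, using the SPDK checkout path shown in the trace; the per-flag notes are a best-effort reading of the example's options, not text from the log.
# Sketch (illustrative): rerun the abort example against the target traced above.
# -r selects the NVMe-oF/TCP endpoint, -c 0x1 pins the app to core 0, -t 1 runs
# for roughly one second, -l warning sets the log level, and -q 128 keeps the
# queue depth high so requests stay queued in the driver long enough to abort.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/examples/abort" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128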
00:38:39.809 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 4165628 00:38:39.809 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 4165628 00:38:40.745 10:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:40.745 10:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:40.745 10:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:40.745 10:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:38:40.745 10:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:38:40.745 10:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:40.745 10:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:38:40.745 10:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:40.745 10:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:40.745 10:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:40.745 10:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:40.745 10:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:43.282 00:38:43.282 real 0m12.522s 00:38:43.282 user 0m12.184s 00:38:43.282 sys 0m5.317s 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:43.282 ************************************ 00:38:43.282 END TEST nvmf_abort 00:38:43.282 ************************************ 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:43.282 ************************************ 00:38:43.282 START TEST nvmf_ns_hotplug_stress 00:38:43.282 ************************************ 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:43.282 * Looking for test storage... 
00:38:43.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:38:43.282 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:43.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:43.283 --rc genhtml_branch_coverage=1 00:38:43.283 --rc genhtml_function_coverage=1 00:38:43.283 --rc genhtml_legend=1 00:38:43.283 --rc geninfo_all_blocks=1 00:38:43.283 --rc geninfo_unexecuted_blocks=1 00:38:43.283 00:38:43.283 ' 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:43.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:43.283 --rc genhtml_branch_coverage=1 00:38:43.283 --rc genhtml_function_coverage=1 00:38:43.283 --rc genhtml_legend=1 00:38:43.283 --rc geninfo_all_blocks=1 00:38:43.283 --rc geninfo_unexecuted_blocks=1 00:38:43.283 00:38:43.283 ' 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:43.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:43.283 --rc genhtml_branch_coverage=1 00:38:43.283 --rc genhtml_function_coverage=1 00:38:43.283 --rc genhtml_legend=1 00:38:43.283 --rc geninfo_all_blocks=1 00:38:43.283 --rc geninfo_unexecuted_blocks=1 00:38:43.283 00:38:43.283 ' 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:43.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:43.283 --rc genhtml_branch_coverage=1 00:38:43.283 --rc genhtml_function_coverage=1 
00:38:43.283 --rc genhtml_legend=1 00:38:43.283 --rc geninfo_all_blocks=1 00:38:43.283 --rc geninfo_unexecuted_blocks=1 00:38:43.283 00:38:43.283 ' 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:43.283 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:43.284 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:38:43.284 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:48.555 10:40:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:48.555 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:48.556 10:40:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:48.556 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:48.556 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:48.556 
10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:48.556 Found net devices under 0000:af:00.0: cvl_0_0 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:48.556 Found net devices under 0000:af:00.1: cvl_0_1 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:48.556 10:40:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:48.556 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:48.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:48.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:38:48.556 00:38:48.556 --- 10.0.0.2 ping statistics --- 00:38:48.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:48.556 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:48.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:48.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:38:48.556 00:38:48.556 --- 10.0.0.1 ping statistics --- 00:38:48.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:48.556 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=4169764 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 4169764 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 4169764 ']' 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:48.556 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:48.557 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:48.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:48.557 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:48.557 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:48.557 [2024-12-13 10:40:42.297366] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:48.557 [2024-12-13 10:40:42.299497] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:38:48.557 [2024-12-13 10:40:42.299565] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:48.557 [2024-12-13 10:40:42.417962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:48.816 [2024-12-13 10:40:42.523392] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:48.816 [2024-12-13 10:40:42.523431] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:48.816 [2024-12-13 10:40:42.523443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:48.816 [2024-12-13 10:40:42.523454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:48.816 [2024-12-13 10:40:42.523463] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:48.816 [2024-12-13 10:40:42.525547] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:48.816 [2024-12-13 10:40:42.525613] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:48.816 [2024-12-13 10:40:42.525624] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:38:49.075 [2024-12-13 10:40:42.828705] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:49.075 [2024-12-13 10:40:42.829731] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:49.075 [2024-12-13 10:40:42.830262] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:49.075 [2024-12-13 10:40:42.830474] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:38:49.333 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:49.333 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:38:49.333 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:49.333 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:49.333 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:49.333 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:49.333 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:38:49.333 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:49.593 [2024-12-13 10:40:43.310372] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:49.593 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:49.852 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:49.852 [2024-12-13 10:40:43.714813] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:49.852 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:50.110 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:38:50.369 Malloc0 00:38:50.369 10:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:50.628 Delay0 00:38:50.628 10:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:50.628 10:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:38:50.886 NULL1 00:38:50.886 10:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
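Note: once the target answers on /var/tmp/spdk.sock, the whole configuration above happens over JSON-RPC. The calls traced here, with rpc.py abbreviating the full scripts/rpc.py path used in the run, are:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 512 -b Malloc0          # backing RAM disk, 512-byte blocks
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # becomes NSID 1
  rpc.py bdev_null_create NULL1 1000 512               # size 1000, 512-byte blocks
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # becomes NSID 2

Delay0 layers a large artificial latency on top of Malloc0, which keeps I/O in flight long enough for the hot-plug operations below to race against it; NSID 1 (Delay0) is the namespace that gets repeatedly removed and re-added, while the perf job below issues reads against NSID 2 (NULL1), which is resized on every iteration.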
00:38:51.145 10:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4170134 00:38:51.145 10:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:38:51.145 10:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:38:51.146 10:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:51.407 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:51.668 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:38:51.668 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:38:51.668 true 00:38:51.668 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:38:51.668 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:51.926 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:52.185 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:38:52.185 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:38:52.443 true 00:38:52.443 10:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:38:52.443 10:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:52.702 10:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:52.961 10:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:38:52.961 10:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:38:52.961 true 00:38:52.961 10:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
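Note: from here until the 30-second perf run ends, the trace is one repeating pattern. Reconstructed from the script line numbers visible in the trace (ns_hotplug_stress.sh@40-@50, paths again abbreviated), the stress loop is approximately:

  ./build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do   # keep mutating while I/O is running
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove NSID 1
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
      null_size=$((null_size + 1))
      rpc.py bdev_null_resize NULL1 "$null_size"                      # grow the active namespace
  done

Each iteration bumps null_size by one (1001, 1002, ...), which is the counter visible in the trace below. The exact loop structure is an assumption, but the three RPCs and the counter match the trace directly.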
-- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:38:52.961 10:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:53.219 10:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:53.477 10:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:38:53.477 10:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:38:53.736 true 00:38:53.736 10:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:38:53.736 10:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:53.995 10:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:54.254 10:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:38:54.254 10:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:38:54.254 true 00:38:54.254 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:38:54.254 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:54.512 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:54.771 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:38:54.771 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:38:55.029 true 00:38:55.029 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:38:55.029 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:55.288 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:38:55.546 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:38:55.546 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:38:55.546 true 00:38:55.805 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:38:55.805 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:55.805 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:56.064 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:38:56.064 10:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:38:56.322 true 00:38:56.322 10:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:38:56.322 10:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:56.581 10:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:56.840 10:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:38:56.840 10:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:38:56.840 true 00:38:57.099 10:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:38:57.099 10:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:57.099 10:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:57.358 10:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:38:57.358 10:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:38:57.617 true 00:38:57.617 10:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 4170134 00:38:57.617 10:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:57.875 10:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:58.134 10:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:38:58.134 10:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:38:58.134 true 00:38:58.392 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:38:58.392 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:58.651 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:58.651 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:38:58.651 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:38:58.910 true 00:38:58.910 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:38:58.910 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:59.168 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:59.427 10:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:38:59.427 10:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:38:59.685 true 00:38:59.685 10:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:38:59.685 10:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:59.943 10:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:59.943 10:40:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:38:59.943 10:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:39:00.202 true 00:39:00.202 10:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:00.202 10:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:00.460 10:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:00.719 10:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:39:00.719 10:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:39:00.977 true 00:39:00.977 10:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:00.977 10:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:01.236 10:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:01.493 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:39:01.493 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:39:01.493 true 00:39:01.493 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:01.493 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:01.751 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:02.009 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:39:02.009 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:39:02.268 true 00:39:02.268 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:02.268 10:40:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:02.526 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:02.784 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:39:02.784 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:39:02.784 true 00:39:03.043 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:03.043 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:03.043 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:03.301 10:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:39:03.301 10:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:39:03.559 true 00:39:03.559 10:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:03.559 10:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:03.818 10:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:04.076 10:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:39:04.076 10:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:39:04.076 true 00:39:04.335 10:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:04.335 10:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:04.335 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:04.593 10:40:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:39:04.593 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:39:04.852 true 00:39:04.852 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:04.852 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:05.110 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:05.368 10:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:39:05.368 10:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:39:05.368 true 00:39:05.629 10:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:05.629 10:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:05.629 10:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:05.887 10:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:39:05.887 10:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:39:06.145 true 00:39:06.145 10:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:06.145 10:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:06.403 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:06.662 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:39:06.662 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:39:06.921 true 00:39:06.921 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:06.921 10:41:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:07.180 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:07.180 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:39:07.180 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:39:07.438 true 00:39:07.438 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:07.438 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:07.697 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:07.955 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:39:07.955 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:39:08.214 true 00:39:08.214 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:08.214 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:08.474 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:08.474 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:39:08.474 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:39:08.732 true 00:39:08.732 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:08.732 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:08.991 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:09.249 10:41:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:39:09.249 10:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:39:09.508 true 00:39:09.508 10:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:09.508 10:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:09.766 10:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:10.024 10:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:39:10.024 10:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:39:10.024 true 00:39:10.024 10:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:10.024 10:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:10.283 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:10.541 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:39:10.541 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:39:10.799 true 00:39:10.799 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:10.799 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:11.058 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:11.317 10:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:39:11.317 10:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:39:11.317 true 00:39:11.575 10:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:11.575 10:41:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:11.575 10:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:11.833 10:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:39:11.833 10:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:39:12.092 true 00:39:12.092 10:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:12.092 10:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:12.351 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:12.609 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:39:12.609 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:39:12.609 true 00:39:12.868 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:12.868 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:12.868 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:13.127 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:39:13.127 10:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:39:13.385 true 00:39:13.385 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:13.386 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:13.644 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:13.903 10:41:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:39:13.903 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:39:13.903 true 00:39:14.162 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:14.162 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:14.162 10:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:14.420 10:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:39:14.420 10:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:39:14.679 true 00:39:14.679 10:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:14.679 10:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:14.938 10:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:15.197 10:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:39:15.197 10:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:39:15.456 true 00:39:15.456 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:15.456 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:15.714 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:15.714 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:39:15.714 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:39:15.973 true 00:39:15.973 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:15.973 10:41:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:16.231 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:16.490 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:39:16.490 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:39:16.749 true 00:39:16.749 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:16.749 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:17.008 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:17.008 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:39:17.008 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:39:17.266 true 00:39:17.266 10:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:17.266 10:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:17.525 10:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:17.783 10:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:39:17.784 10:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:39:18.042 true 00:39:18.042 10:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:18.042 10:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:18.301 10:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:18.301 10:41:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:39:18.301 10:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:39:18.560 true 00:39:18.560 10:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:18.560 10:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:18.819 10:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:19.078 10:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:39:19.078 10:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:39:19.337 true 00:39:19.337 10:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:19.337 10:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:19.595 10:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:19.596 10:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:39:19.596 10:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:39:19.854 true 00:39:19.854 10:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:19.854 10:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:20.186 10:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:20.489 10:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:39:20.489 10:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:39:20.489 true 00:39:20.489 10:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:20.489 10:41:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:20.791 10:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:21.050 10:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:39:21.050 10:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:39:21.309 true 00:39:21.309 10:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:21.309 10:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:21.567 10:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:21.567 Initializing NVMe Controllers 00:39:21.567 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:21.567 Controller IO queue size 128, less than required. 00:39:21.567 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:21.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:39:21.567 Initialization complete. Launching workers. 
00:39:21.567 ======================================================== 00:39:21.567 Latency(us) 00:39:21.567 Device Information : IOPS MiB/s Average min max 00:39:21.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 24032.98 11.73 5326.20 1738.92 10785.46 00:39:21.567 ======================================================== 00:39:21.567 Total : 24032.98 11.73 5326.20 1738.92 10785.46 00:39:21.567 00:39:21.567 10:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:39:21.567 10:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:39:21.825 true 00:39:21.825 10:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4170134 00:39:21.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4170134) - No such process 00:39:21.825 10:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4170134 00:39:21.825 10:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:22.084 10:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:22.342 10:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:39:22.342 10:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:39:22.342 10:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:39:22.342 10:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:22.342 10:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:39:22.342 null0 00:39:22.342 10:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:22.342 10:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:22.342 10:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:39:22.601 null1 00:39:22.601 10:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:22.601 10:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:22.601 10:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:39:22.860 null2 
00:39:22.860 10:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:22.860 10:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:22.860 10:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:39:22.860 null3 00:39:23.119 10:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:23.119 10:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:23.119 10:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:39:23.119 null4 00:39:23.119 10:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:23.119 10:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:23.119 10:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:39:23.378 null5 00:39:23.378 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:23.378 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:23.378 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:39:23.637 null6 00:39:23.637 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:23.637 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:23.637 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:39:23.637 null7 00:39:23.896 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:23.896 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:23.896 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:39:23.896 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:23.896 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
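At this point in the trace, ns_hotplug_stress.sh (the sh@58-60 markers above) has created eight null bdevs, null0 through null7, each 100 MB with a 4096-byte block size, one backing device per worker thread. A minimal reconstruction of that traced loop, assuming rpc_py is just shorthand for the rpc.py path shown in the log (a sketch of what the trace shows, not the verbatim script):

    # one null bdev per worker thread (ns_hotplug_stress.sh@58-60 in the trace)
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for (( i = 0; i < nthreads; i++ )); do
        # arguments: bdev name, size in MB, block size in bytes
        $rpc_py bdev_null_create "null$i" 100 4096
    done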
00:39:23.896 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:23.896 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:39:23.896 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:23.896 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:39:23.896 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:23.896 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:23.896 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:23.896 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:23.896 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:23.896 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:39:23.896 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
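Each background worker traced above runs the add_remove helper (the sh@14-18 markers in the trace): it hot-adds its null bdev to subsystem nqn.2016-06.io.spdk:cnode1 under a fixed namespace ID, hot-removes it again, and repeats ten times, so eight namespaces are added and removed concurrently against the same subsystem. A minimal sketch reconstructing that pattern from the trace, with rpc_py, nthreads, and pids carried over from the creation step above (the actual script may differ in detail):

    # traced worker body (ns_hotplug_stress.sh@14-18): add and remove one namespace, 10 times
    add_remove() {
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; i++ )); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    # eight workers in parallel, one per null bdev (ns_hotplug_stress.sh@62-66 in the trace)
    for (( i = 0; i < nthreads; i++ )); do
        add_remove $(( i + 1 )) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"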
00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4175290 4175291 4175293 4175295 4175297 4175299 4175301 4175303 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:23.897 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:24.157 10:41:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.157 10:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:24.416 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:24.416 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:24.416 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:24.416 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:24.416 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:24.416 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:24.416 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:24.416 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:24.675 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.675 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.675 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:24.676 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.676 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.676 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:24.676 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.676 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.676 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:24.676 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.676 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.676 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:24.676 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.676 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.676 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:24.676 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.676 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.676 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:24.676 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.676 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.676 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:24.676 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.676 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.676 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:24.676 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:24.935 10:41:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.935 10:41:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:24.935 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:25.194 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:25.194 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:25.194 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:25.194 10:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:25.195 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:25.195 
10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:25.195 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:25.195 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:25.454 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:25.713 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:25.713 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:25.713 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:25.713 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:25.713 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:25.713 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:25.713 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:25.713 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:25.713 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:25.713 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:39:25.713 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:25.972 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:25.972 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:25.972 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:25.972 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:25.972 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:25.972 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:25.972 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:25.972 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:25.972 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:25.972 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:25.972 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:25.972 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:25.972 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:25.972 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:25.972 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:25.972 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:25.972 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:25.973 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:25.973 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:25.973 
10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:25.973 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:25.973 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:25.973 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:25.973 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:25.973 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:25.973 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:25.973 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:25.973 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:25.973 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:26.231 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:26.231 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:26.232 10:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:26.232 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:26.232 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:26.232 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:26.232 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:26.232 10:41:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:26.232 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:26.232 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:26.232 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:26.232 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:26.232 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:26.232 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:26.232 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:26.232 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:26.232 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:26.232 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:26.232 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:26.232 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:26.232 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:26.232 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:26.232 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:26.232 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:26.491 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:26.491 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:26.491 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:26.491 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:26.491 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:26.491 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:26.491 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:26.491 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:26.491 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:26.491 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:26.750 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:26.750 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:26.750 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:26.750 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:26.750 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:26.750 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:26.750 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:26.751 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:26.751 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:26.751 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:26.751 10:41:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:26.751 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:26.751 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:26.751 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:26.751 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:26.751 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:26.751 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:26.751 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:26.751 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:26.751 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:26.751 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:26.751 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:26.751 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:27.009 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:27.009 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:27.009 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:27.009 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:27.009 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:27.009 
10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:27.009 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:27.009 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:27.009 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:27.009 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:27.009 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:27.009 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:27.009 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:27.009 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:27.009 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:27.009 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:27.009 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:27.009 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:27.009 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:27.009 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:27.010 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:27.010 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:27.268 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:27.268 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:27.268 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:27.268 10:41:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:27.268 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:27.268 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:27.268 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:27.268 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:27.268 10:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:27.268 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:27.268 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:27.269 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:27.269 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:27.269 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:27.269 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:27.269 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:27.269 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:27.528 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:27.528 10:41:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:27.788 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:27.788 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:27.788 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:27.788 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:27.788 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:27.788 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:27.788 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:27.788 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:27.788 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:27.788 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
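The entries above are the xtrace of target/ns_hotplug_stress.sh: the @16 checks are the iteration guard, @17 hot-adds namespaces 1-8 (backed by the null bdevs null0-null7) through rpc.py, and @18 hot-removes them again, for ten passes. A minimal sequential sketch of that loop, together with the nvmftestfini teardown the log reaches a few entries further down, is given below; the scrambled ordering of the RPCs in the trace suggests the real script issues them concurrently, so this is an illustrative reconstruction rather than the shipped script.

#!/usr/bin/env bash
# Illustrative reconstruction from the xtrace above, not the actual test script.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for (( i = 0; i < 10; i++ )); do                                         # ns_hotplug_stress.sh@16
    for n in {1..8}; do
        "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"   # @17: hot-add NSIDs 1..8
    done
    for n in {1..8}; do
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n"                    # @18: hot-remove them again
    done
done

# Teardown as traced after the loop (nvmftestfini): unload the host-side NVMe/TCP
# modules, stop the target, and strip the SPDK-tagged iptables rules and namespace.
nvmfpid=4169764                        # pid reported by "killing process with pid 4169764"
trap - SIGINT SIGTERM EXIT
sync
modprobe -v -r nvme-tcp                # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, as logged
modprobe -v -r nvme-fabrics
kill "$nvmfpid"
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1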
00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:28.047 rmmod nvme_tcp 00:39:28.047 rmmod nvme_fabrics 00:39:28.047 rmmod nvme_keyring 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 4169764 ']' 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 4169764 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 4169764 ']' 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 4169764 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4169764 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:28.047 10:41:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4169764' 00:39:28.047 killing process with pid 4169764 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 4169764 00:39:28.047 10:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 4169764 00:39:29.425 10:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:29.425 10:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:29.425 10:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:29.425 10:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:39:29.425 10:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:39:29.425 10:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:29.425 10:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:39:29.425 10:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:29.425 10:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:29.425 10:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:29.425 10:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:29.425 10:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:31.335 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:31.335 00:39:31.335 real 0m48.463s 00:39:31.335 user 3m4.705s 00:39:31.335 sys 0m21.007s 00:39:31.335 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:31.335 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:31.335 ************************************ 00:39:31.335 END TEST nvmf_ns_hotplug_stress 00:39:31.335 ************************************ 00:39:31.335 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:39:31.335 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:31.335 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:31.335 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:31.335 ************************************ 
00:39:31.335 START TEST nvmf_delete_subsystem 00:39:31.335 ************************************ 00:39:31.335 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:39:31.598 * Looking for test storage... 00:39:31.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:31.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.598 --rc genhtml_branch_coverage=1 00:39:31.598 --rc genhtml_function_coverage=1 00:39:31.598 --rc genhtml_legend=1 00:39:31.598 --rc geninfo_all_blocks=1 00:39:31.598 --rc geninfo_unexecuted_blocks=1 00:39:31.598 00:39:31.598 ' 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:31.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.598 --rc genhtml_branch_coverage=1 00:39:31.598 --rc genhtml_function_coverage=1 00:39:31.598 --rc genhtml_legend=1 00:39:31.598 --rc geninfo_all_blocks=1 00:39:31.598 --rc geninfo_unexecuted_blocks=1 00:39:31.598 00:39:31.598 ' 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:31.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.598 --rc genhtml_branch_coverage=1 00:39:31.598 --rc genhtml_function_coverage=1 00:39:31.598 --rc genhtml_legend=1 00:39:31.598 --rc geninfo_all_blocks=1 00:39:31.598 --rc geninfo_unexecuted_blocks=1 00:39:31.598 00:39:31.598 ' 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:31.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.598 --rc genhtml_branch_coverage=1 00:39:31.598 --rc genhtml_function_coverage=1 00:39:31.598 --rc 
genhtml_legend=1 00:39:31.598 --rc geninfo_all_blocks=1 00:39:31.598 --rc geninfo_unexecuted_blocks=1 00:39:31.598 00:39:31.598 ' 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:31.598 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:31.599 10:41:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:39:31.599 10:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:36.874 10:41:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:36.874 10:41:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:36.874 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:36.874 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:36.874 10:41:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:36.874 Found net devices under 0000:af:00.0: cvl_0_0 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:36.874 Found net devices under 0000:af:00.1: cvl_0_1 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:36.874 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:37.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:37.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:39:37.133 00:39:37.133 --- 10.0.0.2 ping statistics --- 00:39:37.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:37.133 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:37.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:37.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:39:37.133 00:39:37.133 --- 10.0.0.1 ping statistics --- 00:39:37.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:37.133 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=4179788 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 4179788 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 4179788 ']' 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:37.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
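For reference, the nvmf_tcp_init sequence traced above boils down to the following: the first e810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420, and a ping in each direction confirms the link before the target is started. A condensed sketch, with the commands lifted from the trace and error handling omitted:

target_if=cvl_0_0
initiator_if=cvl_0_1
target_ns=cvl_0_0_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"
ip netns add "$target_ns"
ip link set "$target_if" netns "$target_ns"           # target NIC disappears from the root ns

ip addr add 10.0.0.1/24 dev "$initiator_if"           # initiator address in the root ns
ip netns exec "$target_ns" ip addr add 10.0.0.2/24 dev "$target_if"   # target address

ip link set "$initiator_if" up
ip netns exec "$target_ns" ip link set "$target_if" up
ip netns exec "$target_ns" ip link set lo up

# Open the NVMe/TCP port; the SPDK_NVMF comment tag is what teardown later greps
# out of iptables-save before restoring the rule set.
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                                    # root ns -> target ns
ip netns exec "$target_ns" ping -c 1 10.0.0.1         # target ns -> root ns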
00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:37.133 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:37.391 [2024-12-13 10:41:31.035601] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:37.391 [2024-12-13 10:41:31.037694] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:39:37.391 [2024-12-13 10:41:31.037775] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:37.391 [2024-12-13 10:41:31.155690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:37.391 [2024-12-13 10:41:31.260734] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:37.391 [2024-12-13 10:41:31.260775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:37.391 [2024-12-13 10:41:31.260787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:37.391 [2024-12-13 10:41:31.260811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:37.391 [2024-12-13 10:41:31.260826] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:37.391 [2024-12-13 10:41:31.262778] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:37.391 [2024-12-13 10:41:31.262789] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:37.959 [2024-12-13 10:41:31.572630] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:37.959 [2024-12-13 10:41:31.573237] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:37.959 [2024-12-13 10:41:31.573467] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
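nvmfappstart then launches nvmf_tgt inside that namespace with two cores (-m 0x3) in interrupt mode, which is what produces the spdk_interrupt_mode_enable and reactor notices above, and waits until the daemon's RPC socket answers before the test proceeds. A rough equivalent, where the polling loop below is only an assumed stand-in for the harness's waitforlisten helper:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_sock=/var/tmp/spdk.sock

ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!                                             # 4179788 in the run above

# Poll the RPC socket until the target is ready; bail out if the process died.
# The UNIX socket lives on the shared filesystem, so rpc.py can run in the root ns.
until "$SPDK/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done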
00:39:38.218 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:38.218 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:39:38.218 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:38.218 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:38.218 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:38.218 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:38.218 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:38.218 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.218 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:38.218 [2024-12-13 10:41:31.903837] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:38.218 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.219 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:38.219 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.219 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:38.219 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.219 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:38.219 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.219 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:38.219 [2024-12-13 10:41:31.932082] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:38.219 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.219 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:39:38.219 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.219 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:38.219 NULL1 00:39:38.219 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.219 10:41:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:38.219 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.219 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:38.219 Delay0 00:39:38.219 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.219 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:38.219 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.219 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:38.219 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.219 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4180003 00:39:38.219 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:39:38.219 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:38.219 [2024-12-13 10:41:32.084368] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
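Collapsing the rpc_cmd traces above, the target-side setup that the perf run exercises is: a TCP transport, one subsystem listening on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev (the -r/-t/-w/-n latencies are in microseconds, so roughly 1 s per I/O) exposed as a namespace. An equivalent manual sequence, sketched with direct scripts/rpc.py calls in place of the test's rpc_cmd wrapper; all parameters are taken from the trace:

  rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512     # 1000 MB null bdev, 512-byte blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s injected latency
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # I/O load that the upcoming nvmf_delete_subsystem call interrupts:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!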
00:39:40.121 10:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:40.121 10:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:40.121 10:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 starting I/O failed: -6 00:39:40.378 Write completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Write completed with error (sct=0, sc=8) 00:39:40.378 starting I/O failed: -6 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Write completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 starting I/O failed: -6 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Write completed with error (sct=0, sc=8) 00:39:40.378 Write completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 starting I/O failed: -6 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Write completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 starting I/O failed: -6 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 starting I/O failed: -6 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Write completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Write completed with error (sct=0, sc=8) 00:39:40.378 starting I/O failed: -6 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Write completed with error (sct=0, sc=8) 00:39:40.378 starting I/O failed: -6 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Write completed with error (sct=0, sc=8) 00:39:40.378 Write completed with error (sct=0, sc=8) 00:39:40.378 Write completed with error (sct=0, sc=8) 00:39:40.378 starting I/O failed: -6 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 starting I/O failed: -6 00:39:40.378 Write completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Write completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Write completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Write completed with error (sct=0, sc=8) 00:39:40.378 Read 
completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Write completed with error (sct=0, sc=8) 00:39:40.378 Write completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Write completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Write completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Write completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Read completed with error (sct=0, sc=8) 00:39:40.378 Write completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 [2024-12-13 10:41:34.212356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with 
error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 [2024-12-13 10:41:34.213363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 starting I/O failed: -6 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 starting I/O failed: -6 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 starting I/O failed: -6 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 starting I/O failed: -6 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 starting I/O failed: -6 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 starting I/O failed: -6 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 starting I/O failed: -6 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 starting I/O failed: -6 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read 
completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 starting I/O failed: -6 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 starting I/O failed: -6 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 starting I/O failed: -6 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 starting I/O failed: -6 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 starting I/O failed: -6 00:39:40.379 Write completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 Read completed with error (sct=0, sc=8) 00:39:40.379 starting I/O failed: -6 00:39:40.379 [2024-12-13 10:41:34.214941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ed00 is same with the state(6) to be set 00:39:41.315 [2024-12-13 10:41:35.182614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e080 is same with the state(6) to be set 00:39:41.574 Write completed with error (sct=0, sc=8) 00:39:41.574 Read completed with error (sct=0, sc=8) 00:39:41.574 Write completed with error (sct=0, sc=8) 00:39:41.574 Read completed with error (sct=0, sc=8) 00:39:41.574 Write completed with error (sct=0, sc=8) 00:39:41.574 Read completed with error (sct=0, sc=8) 00:39:41.574 Read completed with error (sct=0, sc=8) 00:39:41.574 Read completed with error (sct=0, sc=8) 00:39:41.574 Write completed with error (sct=0, sc=8) 00:39:41.574 Read completed with error (sct=0, sc=8) 00:39:41.574 Read completed with error (sct=0, sc=8) 00:39:41.574 Read completed with error (sct=0, sc=8) 00:39:41.574 Write completed with error (sct=0, sc=8) 00:39:41.574 Write completed with error (sct=0, sc=8) 00:39:41.574 Read completed with error (sct=0, sc=8) 00:39:41.574 Write completed with error (sct=0, sc=8) 00:39:41.574 Read completed with error (sct=0, sc=8) 00:39:41.574 Read completed with error (sct=0, sc=8) 00:39:41.574 Write completed with error (sct=0, sc=8) 00:39:41.574 Read completed with error (sct=0, sc=8) 00:39:41.574 Read completed with error (sct=0, sc=8) 00:39:41.574 Read completed with error (sct=0, sc=8) 00:39:41.574 Write completed with error (sct=0, sc=8) 00:39:41.574 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 
Read completed with error (sct=0, sc=8) 00:39:41.575 [2024-12-13 10:41:35.213358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ef80 is same with the state(6) to be set 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 [2024-12-13 10:41:35.214111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e800 is same with the state(6) to be set 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write 
completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 [2024-12-13 10:41:35.214981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ea80 is same with the state(6) to be set 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 Write completed with error (sct=0, sc=8) 00:39:41.575 Read completed with error (sct=0, sc=8) 00:39:41.575 [2024-12-13 10:41:35.220086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set 00:39:41.575 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:41.575 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:39:41.575 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4180003 00:39:41.575 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:39:41.575 Initializing NVMe Controllers 00:39:41.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:41.575 Controller IO queue size 128, less than required. 00:39:41.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:41.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:41.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:41.575 Initialization complete. Launching workers. 
00:39:41.575 ======================================================== 00:39:41.575 Latency(us) 00:39:41.575 Device Information : IOPS MiB/s Average min max 00:39:41.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 190.62 0.09 948177.88 5336.18 1015391.22 00:39:41.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.44 0.08 868200.11 464.76 1011471.05 00:39:41.575 ======================================================== 00:39:41.575 Total : 348.06 0.17 912000.17 464.76 1015391.22 00:39:41.575 00:39:41.575 [2024-12-13 10:41:35.225414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500001e080 (9): Bad file descriptor 00:39:41.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:39:41.834 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:39:41.834 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4180003 00:39:42.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4180003) - No such process 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4180003 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 4180003 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 4180003 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.096 
10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:42.096 [2024-12-13 10:41:35.751846] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4180486 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4180486 00:39:42.096 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:42.096 [2024-12-13 10:41:35.854833] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
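After re-creating the subsystem, listener, and Delay0 namespace, the script launches a shorter perf run (-t 3) and, per the delete_subsystem.sh line numbers visible in the trace (56-60), waits for the perf process with a bounded kill -0 poll rather than a blocking wait. Roughly the following; the failure branch is not visible in the log and is only assumed here:

  delay=0
  while kill -0 "$perf_pid" 2> /dev/null; do   # perf still running?
    sleep 0.5
    (( delay++ > 20 )) && { echo "spdk_nvme_perf did not exit in time"; exit 1; }   # assumed failure handling
  done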
00:39:42.689 10:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:42.689 10:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4180486 00:39:42.689 10:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:42.947 10:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:42.947 10:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4180486 00:39:42.947 10:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:43.513 10:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:43.513 10:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4180486 00:39:43.513 10:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:44.082 10:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:44.082 10:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4180486 00:39:44.082 10:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:44.651 10:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:44.651 10:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4180486 00:39:44.651 10:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:44.910 10:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:44.910 10:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4180486 00:39:44.910 10:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:45.169 Initializing NVMe Controllers 00:39:45.169 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:45.169 Controller IO queue size 128, less than required. 00:39:45.169 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:45.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:45.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:45.169 Initialization complete. Launching workers. 
00:39:45.169 ======================================================== 00:39:45.169 Latency(us) 00:39:45.169 Device Information : IOPS MiB/s Average min max 00:39:45.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003592.54 1000155.60 1011642.00 00:39:45.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006209.92 1000336.93 1014399.29 00:39:45.169 ======================================================== 00:39:45.169 Total : 256.00 0.12 1004901.23 1000155.60 1014399.29 00:39:45.169 00:39:45.428 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:45.428 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4180486 00:39:45.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4180486) - No such process 00:39:45.428 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4180486 00:39:45.428 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:39:45.428 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:39:45.428 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:45.428 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:39:45.428 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:45.428 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:39:45.428 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:45.428 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:45.428 rmmod nvme_tcp 00:39:45.687 rmmod nvme_fabrics 00:39:45.687 rmmod nvme_keyring 00:39:45.687 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:45.687 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:39:45.687 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:39:45.687 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 4179788 ']' 00:39:45.687 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 4179788 00:39:45.687 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 4179788 ']' 00:39:45.687 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 4179788 00:39:45.687 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:39:45.687 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:45.687 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4179788 00:39:45.687 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:45.687 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:45.687 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4179788' 00:39:45.687 killing process with pid 4179788 00:39:45.687 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 4179788 00:39:45.687 10:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 4179788 00:39:47.064 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:47.064 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:47.064 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:47.064 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:39:47.064 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:39:47.064 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:39:47.064 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:47.064 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:47.064 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:47.064 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:47.064 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:47.064 10:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:48.968 00:39:48.968 real 0m17.436s 00:39:48.968 user 0m27.491s 00:39:48.968 sys 0m6.108s 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:48.968 ************************************ 00:39:48.968 END TEST nvmf_delete_subsystem 00:39:48.968 ************************************ 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:48.968 ************************************ 00:39:48.968 START TEST nvmf_host_management 00:39:48.968 ************************************ 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:48.968 * Looking for test storage... 00:39:48.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:48.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.968 --rc genhtml_branch_coverage=1 00:39:48.968 --rc genhtml_function_coverage=1 00:39:48.968 --rc genhtml_legend=1 00:39:48.968 --rc geninfo_all_blocks=1 00:39:48.968 --rc geninfo_unexecuted_blocks=1 00:39:48.968 00:39:48.968 ' 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:48.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.968 --rc genhtml_branch_coverage=1 00:39:48.968 --rc genhtml_function_coverage=1 00:39:48.968 --rc genhtml_legend=1 00:39:48.968 --rc geninfo_all_blocks=1 00:39:48.968 --rc geninfo_unexecuted_blocks=1 00:39:48.968 00:39:48.968 ' 00:39:48.968 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:48.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:48.968 --rc genhtml_branch_coverage=1 00:39:48.968 --rc genhtml_function_coverage=1 00:39:48.968 --rc genhtml_legend=1 00:39:48.968 --rc geninfo_all_blocks=1 00:39:48.968 --rc geninfo_unexecuted_blocks=1 00:39:48.968 00:39:48.968 ' 00:39:49.227 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:49.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.227 --rc genhtml_branch_coverage=1 00:39:49.227 --rc genhtml_function_coverage=1 00:39:49.227 --rc genhtml_legend=1 
00:39:49.227 --rc geninfo_all_blocks=1 00:39:49.227 --rc geninfo_unexecuted_blocks=1 00:39:49.227 00:39:49.227 ' 00:39:49.227 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:49.227 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:39:49.227 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:49.227 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:49.227 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:49.227 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:49.227 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:49.227 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:49.227 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:49.228 10:41:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:39:49.228 10:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:54.500 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:54.500 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:54.501 10:41:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:54.501 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:54.501 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:54.501 Found net devices under 0000:af:00.0: cvl_0_0 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:54.501 Found net devices under 0000:af:00.1: cvl_0_1 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:54.501 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:54.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:54.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:39:54.502 00:39:54.502 --- 10.0.0.2 ping statistics --- 00:39:54.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:54.502 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:54.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:54.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:39:54.502 00:39:54.502 --- 10.0.0.1 ping statistics --- 00:39:54.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:54.502 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=4184593 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 4184593 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4184593 ']' 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:54.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:54.502 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:54.502 [2024-12-13 10:41:47.901222] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:54.502 [2024-12-13 10:41:47.903246] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:39:54.502 [2024-12-13 10:41:47.903328] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:54.502 [2024-12-13 10:41:48.022229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:54.502 [2024-12-13 10:41:48.127128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:54.502 [2024-12-13 10:41:48.127172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:54.502 [2024-12-13 10:41:48.127184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:54.502 [2024-12-13 10:41:48.127192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:54.502 [2024-12-13 10:41:48.127217] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:54.502 [2024-12-13 10:41:48.129557] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:39:54.502 [2024-12-13 10:41:48.129635] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:39:54.502 [2024-12-13 10:41:48.129668] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:54.502 [2024-12-13 10:41:48.129693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:39:54.761 [2024-12-13 10:41:48.431655] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:54.761 [2024-12-13 10:41:48.433160] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:54.761 [2024-12-13 10:41:48.435086] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:54.761 [2024-12-13 10:41:48.435914] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:54.761 [2024-12-13 10:41:48.436235] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:39:55.020 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:55.020 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:39:55.020 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:55.020 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:55.020 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:55.020 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:55.020 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:55.020 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.020 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:55.020 [2024-12-13 10:41:48.742675] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:55.020 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.020 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:39:55.020 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:55.020 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:55.020 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:39:55.020 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:39:55.020 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:39:55.020 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.020 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:55.020 Malloc0 00:39:55.020 [2024-12-13 10:41:48.866677] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:55.020 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.020 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:39:55.020 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:55.020 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:55.280 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4184677 00:39:55.280 10:41:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4184677 /var/tmp/bdevperf.sock 00:39:55.280 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 4184677 ']' 00:39:55.280 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:55.280 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:39:55.280 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:55.280 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:39:55.280 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:55.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:55.280 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:39:55.280 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:55.280 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:39:55.280 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:55.280 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:55.280 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:55.280 { 00:39:55.280 "params": { 00:39:55.280 "name": "Nvme$subsystem", 00:39:55.280 "trtype": "$TEST_TRANSPORT", 00:39:55.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:55.280 "adrfam": "ipv4", 00:39:55.280 "trsvcid": "$NVMF_PORT", 00:39:55.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:55.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:55.280 "hdgst": ${hdgst:-false}, 00:39:55.280 "ddgst": ${ddgst:-false} 00:39:55.280 }, 00:39:55.280 "method": "bdev_nvme_attach_controller" 00:39:55.280 } 00:39:55.280 EOF 00:39:55.280 )") 00:39:55.280 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:39:55.280 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:39:55.280 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:39:55.280 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:55.280 "params": { 00:39:55.280 "name": "Nvme0", 00:39:55.280 "trtype": "tcp", 00:39:55.280 "traddr": "10.0.0.2", 00:39:55.280 "adrfam": "ipv4", 00:39:55.280 "trsvcid": "4420", 00:39:55.280 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:55.280 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:55.280 "hdgst": false, 00:39:55.280 "ddgst": false 00:39:55.280 }, 00:39:55.280 "method": "bdev_nvme_attach_controller" 00:39:55.280 }' 00:39:55.280 [2024-12-13 10:41:48.989582] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:39:55.280 [2024-12-13 10:41:48.989674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4184677 ] 00:39:55.280 [2024-12-13 10:41:49.107995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:55.539 [2024-12-13 10:41:49.220802] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:56.107 Running I/O for 10 seconds... 00:39:56.107 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:56.107 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:39:56.107 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:39:56.107 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.107 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:56.107 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.107 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:56.107 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:39:56.107 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:39:56.107 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:39:56.107 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:39:56.107 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:39:56.107 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:39:56.107 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:39:56.107 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:39:56.107 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:39:56.107 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.107 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:56.107 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.107 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:39:56.107 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:39:56.107 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:39:56.366 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:39:56.366 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:39:56.367 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:39:56.367 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:39:56.367 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.367 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:56.367 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.367 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:39:56.367 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:39:56.367 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:39:56.367 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:39:56.367 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:39:56.367 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:56.367 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.367 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:56.367 [2024-12-13 10:41:50.250348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:56.367 [2024-12-13 10:41:50.250402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:56.367 [2024-12-13 10:41:50.250414] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:56.367 [2024-12-13 10:41:50.250423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:56.367 [2024-12-13 10:41:50.250432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:56.367 [2024-12-13 10:41:50.250440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:56.367 [2024-12-13 10:41:50.250455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:56.367 [2024-12-13 10:41:50.250463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:56.367 [2024-12-13 10:41:50.250471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:56.367 [2024-12-13 10:41:50.250480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:56.367 [2024-12-13 10:41:50.250488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:39:56.367 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.367 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:56.367 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.367 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:56.627 [2024-12-13 10:41:50.261984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:56.627 [2024-12-13 10:41:50.262028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.627 [2024-12-13 10:41:50.262043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:56.627 [2024-12-13 10:41:50.262054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.627 [2024-12-13 10:41:50.262064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:56.627 [2024-12-13 10:41:50.262074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.627 [2024-12-13 10:41:50.262085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:56.627 [2024-12-13 10:41:50.262095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.627 [2024-12-13 10:41:50.262115] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:39:56.627 [2024-12-13 10:41:50.262471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.627 [2024-12-13 10:41:50.262499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.627 [2024-12-13 10:41:50.262521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.262533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.262545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.262555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.262567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.262578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.262589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.262600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.262611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.262621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.262633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.262643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.262655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.262665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.262676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.262686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.262698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.262707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.262719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.262728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.262739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.262749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.262761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.262774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.262786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.262797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.262809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.262819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.262831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.262841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.262852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.262863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.628 [2024-12-13 10:41:50.262876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.262887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.262899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.262908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.262920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 
nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.262930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.262942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.262952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.262963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.262973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.262985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.262995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.263007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.263016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.263028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.263039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.263051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.263061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.263073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.263082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.263094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.263104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.263116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.263125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.263139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 
lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.263150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.263163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.263173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.263185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.263195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.263207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.263217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:39:56.628 [2024-12-13 10:41:50.263228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.263239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.263251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.263261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.263272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.263282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.263294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.263305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.263317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.263327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.263338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.263348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:39:56.628 [2024-12-13 10:41:50.263360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.628 [2024-12-13 10:41:50.263369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.628 [2024-12-13 10:41:50.263381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 10:41:50.263402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 10:41:50.263423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 10:41:50.263444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 10:41:50.263470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 10:41:50.263491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 10:41:50.263513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 10:41:50.263534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 10:41:50.263555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 
[2024-12-13 10:41:50.263578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 10:41:50.263599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 10:41:50.263633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 10:41:50.263653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 10:41:50.263674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 10:41:50.263694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 10:41:50.263714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 10:41:50.263735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 10:41:50.263756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 10:41:50.263776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 
10:41:50.263796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 10:41:50.263816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 10:41:50.263837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 10:41:50.263859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 10:41:50.263879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:56.629 [2024-12-13 10:41:50.263893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:56.629 [2024-12-13 10:41:50.265132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:39:56.629 task offset: 90112 on job bdev=Nvme0n1 fails 00:39:56.629 00:39:56.629 Latency(us) 00:39:56.629 [2024-12-13T09:41:50.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:56.629 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:56.629 Job: Nvme0n1 ended in about 0.42 seconds with error 00:39:56.629 Verification LBA range: start 0x0 length 0x400 00:39:56.629 Nvme0n1 : 0.42 1670.20 104.39 151.84 0.00 34179.40 2605.84 31332.45 00:39:56.629 [2024-12-13T09:41:50.520Z] =================================================================================================================== 00:39:56.629 [2024-12-13T09:41:50.520Z] Total : 1670.20 104.39 151.84 0.00 34179.40 2605.84 31332.45 00:39:56.629 [2024-12-13 10:41:50.281101] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:56.629 [2024-12-13 10:41:50.281140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:39:56.629 [2024-12-13 10:41:50.332854] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:39:57.567 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4184677 00:39:57.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4184677) - No such process 00:39:57.567 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:39:57.567 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:39:57.567 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:39:57.567 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:39:57.567 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:39:57.567 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:39:57.567 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:57.567 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:57.567 { 00:39:57.567 "params": { 00:39:57.567 "name": "Nvme$subsystem", 00:39:57.567 "trtype": "$TEST_TRANSPORT", 00:39:57.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:57.567 "adrfam": "ipv4", 00:39:57.567 "trsvcid": "$NVMF_PORT", 00:39:57.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:57.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:57.567 "hdgst": ${hdgst:-false}, 00:39:57.567 "ddgst": ${ddgst:-false} 00:39:57.567 }, 00:39:57.567 "method": "bdev_nvme_attach_controller" 00:39:57.567 } 00:39:57.567 EOF 00:39:57.567 )") 00:39:57.567 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:39:57.567 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:39:57.567 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:39:57.567 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:57.567 "params": { 00:39:57.567 "name": "Nvme0", 00:39:57.567 "trtype": "tcp", 00:39:57.567 "traddr": "10.0.0.2", 00:39:57.567 "adrfam": "ipv4", 00:39:57.567 "trsvcid": "4420", 00:39:57.567 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:57.567 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:57.567 "hdgst": false, 00:39:57.567 "ddgst": false 00:39:57.567 }, 00:39:57.567 "method": "bdev_nvme_attach_controller" 00:39:57.567 }' 00:39:57.567 [2024-12-13 10:41:51.346928] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
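For reference, the JSON streamed to bdevperf over /dev/fd/62 above boils down to a single bdev_nvme_attach_controller entry. A standalone reproduction might look like the sketch below; the traddr, NQNs and digest settings are copied from this run, while the surrounding "subsystems"/"bdev" envelope and the use of a plain file instead of a process-substitution fd are assumptions, since the trace only prints the inner object.

  # write the attach-controller config to a file (envelope assumed, values from the trace above)
  cat > /tmp/nvme0.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # same queue depth, IO size, workload and runtime as the traced invocation
  ./build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1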
00:39:57.567 [2024-12-13 10:41:51.347010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4185130 ] 00:39:57.826 [2024-12-13 10:41:51.459974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:57.826 [2024-12-13 10:41:51.569905] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:58.394 Running I/O for 1 seconds... 00:39:59.772 1792.00 IOPS, 112.00 MiB/s 00:39:59.772 Latency(us) 00:39:59.772 [2024-12-13T09:41:53.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:59.772 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:59.772 Verification LBA range: start 0x0 length 0x400 00:39:59.772 Nvme0n1 : 1.02 1824.79 114.05 0.00 0.00 34501.48 7240.17 30957.96 00:39:59.772 [2024-12-13T09:41:53.663Z] =================================================================================================================== 00:39:59.772 [2024-12-13T09:41:53.663Z] Total : 1824.79 114.05 0.00 0.00 34501.48 7240.17 30957.96 00:40:00.339 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:40:00.339 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:40:00.339 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:40:00.339 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:00.339 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:40:00.339 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:00.339 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:40:00.339 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:00.339 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:40:00.339 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:00.339 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:00.339 rmmod nvme_tcp 00:40:00.339 rmmod nvme_fabrics 00:40:00.339 rmmod nvme_keyring 00:40:00.339 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:00.339 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:40:00.339 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:40:00.339 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 4184593 ']' 00:40:00.340 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 4184593 00:40:00.340 10:41:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 4184593 ']' 00:40:00.340 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 4184593 00:40:00.340 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:40:00.340 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:00.340 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4184593 00:40:00.598 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:00.598 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:00.598 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4184593' 00:40:00.598 killing process with pid 4184593 00:40:00.598 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 4184593 00:40:00.598 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 4184593 00:40:01.977 [2024-12-13 10:41:55.489241] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:40:01.977 10:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:01.977 10:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:01.977 10:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:01.977 10:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:40:01.977 10:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:40:01.977 10:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:01.977 10:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:40:01.977 10:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:01.977 10:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:01.977 10:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:01.977 10:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:01.977 10:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:03.931 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:03.931 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:40:03.931 00:40:03.931 real 0m14.927s 00:40:03.931 user 
0m28.710s 00:40:03.931 sys 0m6.330s 00:40:03.931 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:03.931 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:03.931 ************************************ 00:40:03.931 END TEST nvmf_host_management 00:40:03.931 ************************************ 00:40:03.932 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:40:03.932 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:03.932 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:03.932 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:03.932 ************************************ 00:40:03.932 START TEST nvmf_lvol 00:40:03.932 ************************************ 00:40:03.932 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:40:03.932 * Looking for test storage... 00:40:03.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:03.932 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:03.932 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:40:03.932 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:04.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:04.247 --rc genhtml_branch_coverage=1 00:40:04.247 --rc genhtml_function_coverage=1 00:40:04.247 --rc genhtml_legend=1 00:40:04.247 --rc geninfo_all_blocks=1 00:40:04.247 --rc geninfo_unexecuted_blocks=1 00:40:04.247 00:40:04.247 ' 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:04.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:04.247 --rc genhtml_branch_coverage=1 00:40:04.247 --rc genhtml_function_coverage=1 00:40:04.247 --rc genhtml_legend=1 00:40:04.247 --rc geninfo_all_blocks=1 00:40:04.247 --rc geninfo_unexecuted_blocks=1 00:40:04.247 00:40:04.247 ' 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:04.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:04.247 --rc genhtml_branch_coverage=1 00:40:04.247 --rc genhtml_function_coverage=1 00:40:04.247 --rc genhtml_legend=1 00:40:04.247 --rc geninfo_all_blocks=1 00:40:04.247 --rc geninfo_unexecuted_blocks=1 00:40:04.247 00:40:04.247 ' 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:04.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:04.247 --rc genhtml_branch_coverage=1 00:40:04.247 --rc genhtml_function_coverage=1 
00:40:04.247 --rc genhtml_legend=1 00:40:04.247 --rc geninfo_all_blocks=1 00:40:04.247 --rc geninfo_unexecuted_blocks=1 00:40:04.247 00:40:04.247 ' 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:04.247 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:04.248 10:41:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:40:04.248 10:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:09.541 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:09.542 10:42:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:09.542 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:09.542 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:09.542 Found net devices under 0000:af:00.0: cvl_0_0 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:09.542 Found net devices under 0000:af:00.1: cvl_0_1 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:09.542 
10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:09.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:09.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:40:09.542 00:40:09.542 --- 10.0.0.2 ping statistics --- 00:40:09.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:09.542 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:09.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:09.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:40:09.542 00:40:09.542 --- 10.0.0.1 ping statistics --- 00:40:09.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:09.542 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=4189280 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 4189280 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 4189280 ']' 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:09.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:09.542 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:09.801 [2024-12-13 10:42:03.474724] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
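Condensed, the nvmf_tcp_init plumbing traced above comes down to the sequence below. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are taken from this run (a dual-port e810 NIC); the SPDK_NVMF comment on the iptables rule is dropped here for brevity.

  # move the target-side port into its own namespace, keep the initiator port in the root namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator interface, then verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1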
00:40:09.801 [2024-12-13 10:42:03.476820] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:40:09.801 [2024-12-13 10:42:03.476887] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:09.801 [2024-12-13 10:42:03.595978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:10.060 [2024-12-13 10:42:03.697897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:10.060 [2024-12-13 10:42:03.697939] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:10.061 [2024-12-13 10:42:03.697950] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:10.061 [2024-12-13 10:42:03.697959] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:10.061 [2024-12-13 10:42:03.697985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:10.061 [2024-12-13 10:42:03.700076] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:10.061 [2024-12-13 10:42:03.700143] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:10.061 [2024-12-13 10:42:03.700149] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:10.319 [2024-12-13 10:42:04.027820] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:10.319 [2024-12-13 10:42:04.028669] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:10.319 [2024-12-13 10:42:04.029424] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:10.319 [2024-12-13 10:42:04.029642] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
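The startup that produces the reactor and interrupt-mode notices above is nvmf_tgt launched inside the target namespace with a three-core mask (-m 0x7), all tracepoint groups enabled (-e 0xFFFF) and --interrupt-mode, followed by waiting for its RPC socket. The launch command and the /var/tmp/spdk.sock address appear in the trace; the polling loop below is an assumed stand-in for the harness's waitforlisten helper, not something traced here.

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
  # poll the default RPC socket until the target answers (assumed loop)
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done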
00:40:10.578 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:10.578 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:40:10.578 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:10.578 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:10.578 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:10.578 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:10.578 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:10.837 [2024-12-13 10:42:04.493186] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:10.837 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:11.096 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:40:11.096 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:11.356 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:40:11.356 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:40:11.356 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:40:11.615 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a0911cfa-ced5-4130-bea7-39cd014bec8c 00:40:11.615 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a0911cfa-ced5-4130-bea7-39cd014bec8c lvol 20 00:40:11.873 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=da663f53-30f9-4c69-8901-a854f82ad12c 00:40:11.873 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:12.132 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 da663f53-30f9-4c69-8901-a854f82ad12c 00:40:12.132 10:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:12.390 [2024-12-13 10:42:06.181072] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:40:12.391 10:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:12.649 10:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4189832 00:40:12.649 10:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:40:12.649 10:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:40:13.585 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot da663f53-30f9-4c69-8901-a854f82ad12c MY_SNAPSHOT 00:40:13.844 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9809894a-e789-44d0-b753-18a7fe56b847 00:40:13.844 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize da663f53-30f9-4c69-8901-a854f82ad12c 30 00:40:14.102 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9809894a-e789-44d0-b753-18a7fe56b847 MY_CLONE 00:40:14.361 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7df3fa97-ac06-4809-93d8-95f91865cdc7 00:40:14.361 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 7df3fa97-ac06-4809-93d8-95f91865cdc7 00:40:14.930 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4189832 00:40:23.046 Initializing NVMe Controllers 00:40:23.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:40:23.046 Controller IO queue size 128, less than required. 00:40:23.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:23.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:40:23.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:40:23.046 Initialization complete. Launching workers. 
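Stripped of the xtrace noise, the scenario exercised above is: build a raid0 over two 64 MiB malloc bdevs, create a 20 MiB lvol in a lvstore on top of it, export it over NVMe/TCP, and then snapshot, resize to 30 MiB, clone and inflate the volume while spdk_nvme_perf keeps a randwrite load running against it. With shell variables standing in for the UUIDs each call returns, and paths relative to the SPDK tree, the traced sequence is:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                                   # -> Malloc0
  $rpc bdev_malloc_create 64 512                                   # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  perf_pid=$!
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"
  wait $perf_pid

The perf numbers this run produced are summarized in the table that follows.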
00:40:23.046 ======================================================== 00:40:23.046 Latency(us) 00:40:23.046 Device Information : IOPS MiB/s Average min max 00:40:23.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11381.50 44.46 11249.11 533.81 177194.59 00:40:23.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11082.80 43.29 11555.65 3598.00 155333.43 00:40:23.046 ======================================================== 00:40:23.046 Total : 22464.30 87.75 11400.34 533.81 177194.59 00:40:23.046 00:40:23.046 10:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:23.304 10:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete da663f53-30f9-4c69-8901-a854f82ad12c 00:40:23.305 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a0911cfa-ced5-4130-bea7-39cd014bec8c 00:40:23.563 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:40:23.563 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:40:23.563 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:40:23.563 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:23.563 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:40:23.563 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:23.563 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:40:23.563 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:23.563 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:23.563 rmmod nvme_tcp 00:40:23.563 rmmod nvme_fabrics 00:40:23.563 rmmod nvme_keyring 00:40:23.563 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:23.563 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:40:23.563 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:40:23.563 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 4189280 ']' 00:40:23.563 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 4189280 00:40:23.563 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 4189280 ']' 00:40:23.563 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 4189280 00:40:23.563 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:40:23.822 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:23.822 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4189280 00:40:23.822 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:23.822 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:23.822 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4189280' 00:40:23.822 killing process with pid 4189280 00:40:23.822 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 4189280 00:40:23.822 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 4189280 00:40:25.199 10:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:25.199 10:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:25.199 10:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:25.199 10:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:40:25.199 10:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:40:25.199 10:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:40:25.199 10:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:25.199 10:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:25.199 10:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:25.199 10:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:25.199 10:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:25.199 10:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:27.734 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:27.734 00:40:27.734 real 0m23.357s 00:40:27.734 user 0m57.619s 00:40:27.734 sys 0m9.118s 00:40:27.734 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:27.734 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:27.734 ************************************ 00:40:27.734 END TEST nvmf_lvol 00:40:27.734 ************************************ 00:40:27.734 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:27.734 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:27.734 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:27.734 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:27.734 ************************************ 00:40:27.734 START TEST nvmf_lvs_grow 00:40:27.734 
************************************ 00:40:27.734 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:27.734 * Looking for test storage... 00:40:27.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:27.734 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:27.734 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:40:27.734 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:27.734 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:27.734 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:27.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.735 --rc genhtml_branch_coverage=1 00:40:27.735 --rc genhtml_function_coverage=1 00:40:27.735 --rc genhtml_legend=1 00:40:27.735 --rc geninfo_all_blocks=1 00:40:27.735 --rc geninfo_unexecuted_blocks=1 00:40:27.735 00:40:27.735 ' 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:27.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.735 --rc genhtml_branch_coverage=1 00:40:27.735 --rc genhtml_function_coverage=1 00:40:27.735 --rc genhtml_legend=1 00:40:27.735 --rc geninfo_all_blocks=1 00:40:27.735 --rc geninfo_unexecuted_blocks=1 00:40:27.735 00:40:27.735 ' 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:27.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.735 --rc genhtml_branch_coverage=1 00:40:27.735 --rc genhtml_function_coverage=1 00:40:27.735 --rc genhtml_legend=1 00:40:27.735 --rc geninfo_all_blocks=1 00:40:27.735 --rc geninfo_unexecuted_blocks=1 00:40:27.735 00:40:27.735 ' 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:27.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:27.735 --rc genhtml_branch_coverage=1 00:40:27.735 --rc genhtml_function_coverage=1 00:40:27.735 --rc genhtml_legend=1 00:40:27.735 --rc geninfo_all_blocks=1 00:40:27.735 --rc geninfo_unexecuted_blocks=1 00:40:27.735 00:40:27.735 ' 00:40:27.735 10:42:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:27.735 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:40:27.736 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:33.007 10:42:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:33.007 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:33.007 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:33.007 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:33.008 Found net devices under 0000:af:00.0: cvl_0_0 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:33.008 Found net devices under 0000:af:00.1: cvl_0_1 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:33.008 10:42:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:33.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:33.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:40:33.008 00:40:33.008 --- 10.0.0.2 ping statistics --- 00:40:33.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:33.008 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:33.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:33.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:40:33.008 00:40:33.008 --- 10.0.0.1 ping statistics --- 00:40:33.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:33.008 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1850 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1850 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1850 ']' 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:33.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:33.008 10:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:33.008 [2024-12-13 10:42:26.866557] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
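
[Annotation] The nvmftestinit/nvmfappstart steps above build a two-port NVMe/TCP test bed: the first E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target, the second port (cvl_0_1) stays in the root namespace as the initiator, and nvmf_tgt is then started inside the namespace in interrupt mode. Condensed from the commands in this trace (interface and namespace names are the ones detected on this host):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                               # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                     # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0       # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT            # open TCP/4420 on the initiator-side interface
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1   # single-core target in interrupt mode

The two pings (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) verify the path in both directions before the target is started.
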
00:40:33.008 [2024-12-13 10:42:26.868665] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:40:33.008 [2024-12-13 10:42:26.868733] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:33.267 [2024-12-13 10:42:26.987115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:33.267 [2024-12-13 10:42:27.091591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:33.267 [2024-12-13 10:42:27.091630] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:33.267 [2024-12-13 10:42:27.091643] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:33.267 [2024-12-13 10:42:27.091653] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:33.267 [2024-12-13 10:42:27.091664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:33.267 [2024-12-13 10:42:27.093093] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:33.526 [2024-12-13 10:42:27.407696] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:33.526 [2024-12-13 10:42:27.407930] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:33.785 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:33.785 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:40:33.785 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:33.785 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:33.785 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:34.044 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:34.044 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:34.044 [2024-12-13 10:42:27.873908] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:34.044 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:40:34.044 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:34.044 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:34.044 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:34.044 ************************************ 00:40:34.044 START TEST lvs_grow_clean 00:40:34.044 ************************************ 00:40:34.044 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:40:34.044 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:34.044 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:34.044 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:34.044 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:34.044 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:34.044 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:34.044 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:34.044 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:34.303 10:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:34.303 10:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:34.303 10:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:34.562 10:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=88b15021-1b11-41f1-a623-e30473152a6c 00:40:34.562 10:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88b15021-1b11-41f1-a623-e30473152a6c 00:40:34.562 10:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:34.821 10:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:34.821 10:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:34.821 10:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 88b15021-1b11-41f1-a623-e30473152a6c lvol 150 00:40:34.821 10:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5e8aea58-59c3-41d2-bc4f-b973366ded4e 00:40:34.821 10:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:34.821 10:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:35.080 [2024-12-13 10:42:28.861798] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:35.080 [2024-12-13 10:42:28.861969] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:35.080 true 00:40:35.080 10:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:35.080 10:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88b15021-1b11-41f1-a623-e30473152a6c 00:40:35.339 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:35.339 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:35.597 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5e8aea58-59c3-41d2-bc4f-b973366ded4e 00:40:35.597 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:35.856 [2024-12-13 10:42:29.622270] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:35.856 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:36.115 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2375 00:40:36.115 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:36.115 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:36.115 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2375 /var/tmp/bdevperf.sock 00:40:36.115 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2375 ']' 00:40:36.115 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
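
[Annotation] The lvs_grow_clean setup above boils down to: create a 200 MiB file-backed AIO bdev, build an lvstore with 4 MiB clusters on it (49 usable data clusters), carve out a 150 MiB lvol, then double the backing file and rescan. After bdev_aio_rescan the lvstore still reports 49 clusters; it only grows when bdev_lvol_grow_lvstore is issued during the I/O run further down. Condensed sketch (aio_bdev_file stands for the test/nvmf/target/aio_bdev path used in this run):

    truncate -s 200M aio_bdev_file
    rpc.py bdev_aio_create aio_bdev_file aio_bdev 4096                       # 4 KiB-block AIO bdev over the file
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
           --md-pages-per-cluster-ratio 300 aio_bdev lvs                     # 49 data clusters on 200 MiB
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150                           # 150 MiB lvol
    truncate -s 400M aio_bdev_file
    rpc.py bdev_aio_rescan aio_bdev                                          # base bdev grows 51200 -> 102400 blocks
    rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>                              # issued later: data clusters 49 -> 99

The lvol is then exported as namespace 1 of nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420 before the bdevperf run starts.
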
00:40:36.115 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:36.115 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:36.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:36.115 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:36.115 10:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:36.115 [2024-12-13 10:42:29.895787] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:40:36.115 [2024-12-13 10:42:29.895872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2375 ] 00:40:36.374 [2024-12-13 10:42:30.010085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:36.374 [2024-12-13 10:42:30.123957] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:36.941 10:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:36.941 10:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:40:36.941 10:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:37.199 Nvme0n1 00:40:37.199 10:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:37.458 [ 00:40:37.458 { 00:40:37.458 "name": "Nvme0n1", 00:40:37.458 "aliases": [ 00:40:37.458 "5e8aea58-59c3-41d2-bc4f-b973366ded4e" 00:40:37.458 ], 00:40:37.458 "product_name": "NVMe disk", 00:40:37.458 "block_size": 4096, 00:40:37.458 "num_blocks": 38912, 00:40:37.458 "uuid": "5e8aea58-59c3-41d2-bc4f-b973366ded4e", 00:40:37.458 "numa_id": 1, 00:40:37.458 "assigned_rate_limits": { 00:40:37.458 "rw_ios_per_sec": 0, 00:40:37.458 "rw_mbytes_per_sec": 0, 00:40:37.458 "r_mbytes_per_sec": 0, 00:40:37.458 "w_mbytes_per_sec": 0 00:40:37.458 }, 00:40:37.458 "claimed": false, 00:40:37.458 "zoned": false, 00:40:37.458 "supported_io_types": { 00:40:37.458 "read": true, 00:40:37.458 "write": true, 00:40:37.458 "unmap": true, 00:40:37.458 "flush": true, 00:40:37.458 "reset": true, 00:40:37.458 "nvme_admin": true, 00:40:37.458 "nvme_io": true, 00:40:37.458 "nvme_io_md": false, 00:40:37.458 "write_zeroes": true, 00:40:37.458 "zcopy": false, 00:40:37.458 "get_zone_info": false, 00:40:37.458 "zone_management": false, 00:40:37.458 "zone_append": false, 00:40:37.458 "compare": true, 00:40:37.458 "compare_and_write": true, 00:40:37.458 "abort": true, 00:40:37.458 "seek_hole": false, 00:40:37.458 "seek_data": false, 00:40:37.458 "copy": true, 00:40:37.458 "nvme_iov_md": false 
00:40:37.458 }, 00:40:37.458 "memory_domains": [ 00:40:37.458 { 00:40:37.458 "dma_device_id": "system", 00:40:37.458 "dma_device_type": 1 00:40:37.458 } 00:40:37.458 ], 00:40:37.458 "driver_specific": { 00:40:37.458 "nvme": [ 00:40:37.458 { 00:40:37.458 "trid": { 00:40:37.458 "trtype": "TCP", 00:40:37.458 "adrfam": "IPv4", 00:40:37.458 "traddr": "10.0.0.2", 00:40:37.458 "trsvcid": "4420", 00:40:37.458 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:37.458 }, 00:40:37.458 "ctrlr_data": { 00:40:37.458 "cntlid": 1, 00:40:37.458 "vendor_id": "0x8086", 00:40:37.458 "model_number": "SPDK bdev Controller", 00:40:37.458 "serial_number": "SPDK0", 00:40:37.458 "firmware_revision": "25.01", 00:40:37.458 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:37.458 "oacs": { 00:40:37.459 "security": 0, 00:40:37.459 "format": 0, 00:40:37.459 "firmware": 0, 00:40:37.459 "ns_manage": 0 00:40:37.459 }, 00:40:37.459 "multi_ctrlr": true, 00:40:37.459 "ana_reporting": false 00:40:37.459 }, 00:40:37.459 "vs": { 00:40:37.459 "nvme_version": "1.3" 00:40:37.459 }, 00:40:37.459 "ns_data": { 00:40:37.459 "id": 1, 00:40:37.459 "can_share": true 00:40:37.459 } 00:40:37.459 } 00:40:37.459 ], 00:40:37.459 "mp_policy": "active_passive" 00:40:37.459 } 00:40:37.459 } 00:40:37.459 ] 00:40:37.459 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2601 00:40:37.459 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:37.459 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:37.459 Running I/O for 10 seconds... 
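
[Annotation] On the initiator side, bdevperf is started idle (-z) on its own RPC socket, the NVMe-oF controller is attached through that socket, and the timed run is triggered with bdevperf.py, which produces the per-second table that follows. Condensed from the commands above (binary paths shortened):

    bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
           -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The bdev_get_bdevs output above confirms the attached namespace shows up as Nvme0n1 (38912 blocks of 4096 bytes, i.e. the 150 MiB lvol) before I/O starts.
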
00:40:38.395 Latency(us) 00:40:38.395 [2024-12-13T09:42:32.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:38.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:38.395 Nvme0n1 : 1.00 19685.00 76.89 0.00 0.00 0.00 0.00 0.00 00:40:38.395 [2024-12-13T09:42:32.286Z] =================================================================================================================== 00:40:38.395 [2024-12-13T09:42:32.286Z] Total : 19685.00 76.89 0.00 0.00 0.00 0.00 0.00 00:40:38.395 00:40:39.330 10:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 88b15021-1b11-41f1-a623-e30473152a6c 00:40:39.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:39.589 Nvme0n1 : 2.00 20034.50 78.26 0.00 0.00 0.00 0.00 0.00 00:40:39.589 [2024-12-13T09:42:33.480Z] =================================================================================================================== 00:40:39.589 [2024-12-13T09:42:33.480Z] Total : 20034.50 78.26 0.00 0.00 0.00 0.00 0.00 00:40:39.589 00:40:39.589 true 00:40:39.589 10:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88b15021-1b11-41f1-a623-e30473152a6c 00:40:39.589 10:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:39.848 10:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:39.848 10:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:39.848 10:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2601 00:40:40.415 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:40.415 Nvme0n1 : 3.00 20129.67 78.63 0.00 0.00 0.00 0.00 0.00 00:40:40.415 [2024-12-13T09:42:34.306Z] =================================================================================================================== 00:40:40.415 [2024-12-13T09:42:34.306Z] Total : 20129.67 78.63 0.00 0.00 0.00 0.00 0.00 00:40:40.415 00:40:41.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:41.811 Nvme0n1 : 4.00 20240.75 79.07 0.00 0.00 0.00 0.00 0.00 00:40:41.811 [2024-12-13T09:42:35.702Z] =================================================================================================================== 00:40:41.811 [2024-12-13T09:42:35.702Z] Total : 20240.75 79.07 0.00 0.00 0.00 0.00 0.00 00:40:41.811 00:40:42.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:42.453 Nvme0n1 : 5.00 20307.40 79.33 0.00 0.00 0.00 0.00 0.00 00:40:42.453 [2024-12-13T09:42:36.344Z] =================================================================================================================== 00:40:42.453 [2024-12-13T09:42:36.344Z] Total : 20307.40 79.33 0.00 0.00 0.00 0.00 0.00 00:40:42.453 00:40:43.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:43.389 Nvme0n1 : 6.00 20362.50 79.54 0.00 0.00 0.00 0.00 0.00 00:40:43.389 [2024-12-13T09:42:37.280Z] 
=================================================================================================================== 00:40:43.389 [2024-12-13T09:42:37.280Z] Total : 20362.50 79.54 0.00 0.00 0.00 0.00 0.00 00:40:43.389 00:40:44.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:44.764 Nvme0n1 : 7.00 20392.71 79.66 0.00 0.00 0.00 0.00 0.00 00:40:44.764 [2024-12-13T09:42:38.655Z] =================================================================================================================== 00:40:44.764 [2024-12-13T09:42:38.655Z] Total : 20392.71 79.66 0.00 0.00 0.00 0.00 0.00 00:40:44.764 00:40:45.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:45.700 Nvme0n1 : 8.00 20431.25 79.81 0.00 0.00 0.00 0.00 0.00 00:40:45.700 [2024-12-13T09:42:39.591Z] =================================================================================================================== 00:40:45.700 [2024-12-13T09:42:39.591Z] Total : 20431.25 79.81 0.00 0.00 0.00 0.00 0.00 00:40:45.700 00:40:46.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:46.636 Nvme0n1 : 9.00 20440.11 79.84 0.00 0.00 0.00 0.00 0.00 00:40:46.636 [2024-12-13T09:42:40.527Z] =================================================================================================================== 00:40:46.636 [2024-12-13T09:42:40.527Z] Total : 20440.11 79.84 0.00 0.00 0.00 0.00 0.00 00:40:46.636 00:40:47.572 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:47.572 Nvme0n1 : 10.00 20453.50 79.90 0.00 0.00 0.00 0.00 0.00 00:40:47.572 [2024-12-13T09:42:41.463Z] =================================================================================================================== 00:40:47.572 [2024-12-13T09:42:41.463Z] Total : 20453.50 79.90 0.00 0.00 0.00 0.00 0.00 00:40:47.572 00:40:47.572 00:40:47.572 Latency(us) 00:40:47.572 [2024-12-13T09:42:41.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:47.572 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:47.572 Nvme0n1 : 10.00 20457.40 79.91 0.00 0.00 6253.67 3932.16 17850.76 00:40:47.572 [2024-12-13T09:42:41.463Z] =================================================================================================================== 00:40:47.572 [2024-12-13T09:42:41.463Z] Total : 20457.40 79.91 0.00 0.00 6253.67 3932.16 17850.76 00:40:47.572 { 00:40:47.572 "results": [ 00:40:47.572 { 00:40:47.572 "job": "Nvme0n1", 00:40:47.572 "core_mask": "0x2", 00:40:47.572 "workload": "randwrite", 00:40:47.572 "status": "finished", 00:40:47.572 "queue_depth": 128, 00:40:47.572 "io_size": 4096, 00:40:47.572 "runtime": 10.004349, 00:40:47.572 "iops": 20457.403075402508, 00:40:47.572 "mibps": 79.91173076329105, 00:40:47.572 "io_failed": 0, 00:40:47.572 "io_timeout": 0, 00:40:47.572 "avg_latency_us": 6253.669686832453, 00:40:47.572 "min_latency_us": 3932.16, 00:40:47.572 "max_latency_us": 17850.758095238096 00:40:47.572 } 00:40:47.572 ], 00:40:47.572 "core_count": 1 00:40:47.572 } 00:40:47.572 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2375 00:40:47.572 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2375 ']' 00:40:47.573 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2375 00:40:47.573 10:42:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:40:47.573 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:47.573 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2375 00:40:47.573 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:47.573 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:47.573 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2375' 00:40:47.573 killing process with pid 2375 00:40:47.573 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2375 00:40:47.573 Received shutdown signal, test time was about 10.000000 seconds 00:40:47.573 00:40:47.573 Latency(us) 00:40:47.573 [2024-12-13T09:42:41.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:47.573 [2024-12-13T09:42:41.464Z] =================================================================================================================== 00:40:47.573 [2024-12-13T09:42:41.464Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:47.573 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2375 00:40:48.509 10:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:48.768 10:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:48.768 10:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88b15021-1b11-41f1-a623-e30473152a6c 00:40:48.768 10:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:49.027 10:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:49.027 10:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:40:49.027 10:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:49.285 [2024-12-13 10:42:42.993781] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:49.285 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88b15021-1b11-41f1-a623-e30473152a6c 00:40:49.285 10:42:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:40:49.285 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88b15021-1b11-41f1-a623-e30473152a6c 00:40:49.285 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:49.285 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:49.285 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:49.285 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:49.285 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:49.285 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:49.285 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:49.285 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:49.285 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88b15021-1b11-41f1-a623-e30473152a6c 00:40:49.544 request: 00:40:49.544 { 00:40:49.544 "uuid": "88b15021-1b11-41f1-a623-e30473152a6c", 00:40:49.544 "method": "bdev_lvol_get_lvstores", 00:40:49.544 "req_id": 1 00:40:49.544 } 00:40:49.544 Got JSON-RPC error response 00:40:49.544 response: 00:40:49.544 { 00:40:49.544 "code": -19, 00:40:49.544 "message": "No such device" 00:40:49.544 } 00:40:49.544 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:40:49.544 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:49.544 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:49.544 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:49.544 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:49.544 aio_bdev 00:40:49.544 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5e8aea58-59c3-41d2-bc4f-b973366ded4e 
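A condensed sketch of the recovery check the clean test is running here, with RPC, AIO_FILE, LVS_UUID and LVOL_UUID standing in for the full workspace paths and UUIDs visible in the trace (illustrative shorthand, not the literal script):
# After hot-removing the backing aio_bdev, the lvstore must disappear...
$RPC bdev_aio_delete aio_bdev
if $RPC bdev_lvol_get_lvstores -u "$LVS_UUID"; then exit 1; fi   # expected to fail: JSON-RPC -19 "No such device"
# ...and re-creating the same backing file brings the lvstore and its lvol back intact.
$RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096
$RPC bdev_get_bdevs -b "$LVOL_UUID" -t 2000                      # what waitforbdev polls, per the trace
$RPC bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters'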
00:40:49.544 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=5e8aea58-59c3-41d2-bc4f-b973366ded4e 00:40:49.544 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:49.544 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:40:49.544 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:49.544 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:49.544 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:49.803 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5e8aea58-59c3-41d2-bc4f-b973366ded4e -t 2000 00:40:50.062 [ 00:40:50.062 { 00:40:50.062 "name": "5e8aea58-59c3-41d2-bc4f-b973366ded4e", 00:40:50.062 "aliases": [ 00:40:50.062 "lvs/lvol" 00:40:50.062 ], 00:40:50.062 "product_name": "Logical Volume", 00:40:50.062 "block_size": 4096, 00:40:50.062 "num_blocks": 38912, 00:40:50.062 "uuid": "5e8aea58-59c3-41d2-bc4f-b973366ded4e", 00:40:50.062 "assigned_rate_limits": { 00:40:50.062 "rw_ios_per_sec": 0, 00:40:50.062 "rw_mbytes_per_sec": 0, 00:40:50.062 "r_mbytes_per_sec": 0, 00:40:50.062 "w_mbytes_per_sec": 0 00:40:50.062 }, 00:40:50.062 "claimed": false, 00:40:50.062 "zoned": false, 00:40:50.062 "supported_io_types": { 00:40:50.062 "read": true, 00:40:50.062 "write": true, 00:40:50.062 "unmap": true, 00:40:50.062 "flush": false, 00:40:50.062 "reset": true, 00:40:50.062 "nvme_admin": false, 00:40:50.062 "nvme_io": false, 00:40:50.062 "nvme_io_md": false, 00:40:50.062 "write_zeroes": true, 00:40:50.062 "zcopy": false, 00:40:50.062 "get_zone_info": false, 00:40:50.062 "zone_management": false, 00:40:50.062 "zone_append": false, 00:40:50.062 "compare": false, 00:40:50.062 "compare_and_write": false, 00:40:50.062 "abort": false, 00:40:50.062 "seek_hole": true, 00:40:50.062 "seek_data": true, 00:40:50.062 "copy": false, 00:40:50.062 "nvme_iov_md": false 00:40:50.062 }, 00:40:50.062 "driver_specific": { 00:40:50.062 "lvol": { 00:40:50.062 "lvol_store_uuid": "88b15021-1b11-41f1-a623-e30473152a6c", 00:40:50.062 "base_bdev": "aio_bdev", 00:40:50.062 "thin_provision": false, 00:40:50.062 "num_allocated_clusters": 38, 00:40:50.062 "snapshot": false, 00:40:50.062 "clone": false, 00:40:50.062 "esnap_clone": false 00:40:50.062 } 00:40:50.062 } 00:40:50.062 } 00:40:50.062 ] 00:40:50.062 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:40:50.062 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88b15021-1b11-41f1-a623-e30473152a6c 00:40:50.062 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:50.320 10:42:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:50.320 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88b15021-1b11-41f1-a623-e30473152a6c 00:40:50.320 10:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:50.320 10:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:50.321 10:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5e8aea58-59c3-41d2-bc4f-b973366ded4e 00:40:50.579 10:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 88b15021-1b11-41f1-a623-e30473152a6c 00:40:50.838 10:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:51.097 10:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:51.097 00:40:51.097 real 0m16.892s 00:40:51.097 user 0m16.545s 00:40:51.097 sys 0m1.532s 00:40:51.097 10:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:51.097 10:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:51.097 ************************************ 00:40:51.097 END TEST lvs_grow_clean 00:40:51.097 ************************************ 00:40:51.097 10:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:40:51.097 10:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:51.097 10:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:51.097 10:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:51.097 ************************************ 00:40:51.097 START TEST lvs_grow_dirty 00:40:51.097 ************************************ 00:40:51.097 10:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:40:51.097 10:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:51.097 10:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:51.097 10:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:51.097 10:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:51.097 10:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:51.097 10:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:51.097 10:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:51.097 10:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:51.097 10:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:51.355 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:51.355 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:51.612 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=dc06ad4a-7c9a-47e9-b631-31d6418ad1a4 00:40:51.612 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc06ad4a-7c9a-47e9-b631-31d6418ad1a4 00:40:51.612 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:51.871 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:51.871 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:51.871 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dc06ad4a-7c9a-47e9-b631-31d6418ad1a4 lvol 150 00:40:51.871 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c339107e-406b-497f-99d2-5f39e1e99ea5 00:40:51.871 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:51.871 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:52.129 [2024-12-13 10:42:45.905771] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:52.129 [2024-12-13 10:42:45.905945] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:52.129 true 00:40:52.129 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:52.129 10:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc06ad4a-7c9a-47e9-b631-31d6418ad1a4 00:40:52.388 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:52.388 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:52.647 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c339107e-406b-497f-99d2-5f39e1e99ea5 00:40:52.647 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:52.905 [2024-12-13 10:42:46.678021] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:52.905 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:53.163 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:53.163 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=5266 00:40:53.163 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:53.163 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 5266 /var/tmp/bdevperf.sock 00:40:53.163 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 5266 ']' 00:40:53.163 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:53.163 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:53.163 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:53.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
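A condensed sketch of the grow sequence this dirty test drives end to end via rpc.py, using the sizes and RPC names from the trace (paths abbreviated to $RPC and $AIO; values are the ones shown above):
# 200M backing file -> aio bdev -> lvstore -> 150M lvol, then grow the file to 400M.
truncate -s 200M "$AIO"
$RPC bdev_aio_create "$AIO" aio_bdev 4096
LVS=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$RPC bdev_lvol_create -u "$LVS" lvol 150
truncate -s 400M "$AIO"          # grow the backing file...
$RPC bdev_aio_rescan aio_bdev    # ...and let SPDK pick up the new block count
# total_data_clusters stays at the old value until the lvstore itself is grown:
$RPC bdev_lvol_grow_lvstore -u "$LVS"
$RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # 49 -> 99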
00:40:53.163 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:53.164 10:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:53.164 [2024-12-13 10:42:46.943147] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:40:53.164 [2024-12-13 10:42:46.943239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid5266 ] 00:40:53.421 [2024-12-13 10:42:47.056496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:53.421 [2024-12-13 10:42:47.158291] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:53.988 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:53.988 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:40:53.988 10:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:54.246 Nvme0n1 00:40:54.505 10:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:54.505 [ 00:40:54.505 { 00:40:54.505 "name": "Nvme0n1", 00:40:54.505 "aliases": [ 00:40:54.505 "c339107e-406b-497f-99d2-5f39e1e99ea5" 00:40:54.505 ], 00:40:54.505 "product_name": "NVMe disk", 00:40:54.505 "block_size": 4096, 00:40:54.505 "num_blocks": 38912, 00:40:54.505 "uuid": "c339107e-406b-497f-99d2-5f39e1e99ea5", 00:40:54.505 "numa_id": 1, 00:40:54.505 "assigned_rate_limits": { 00:40:54.505 "rw_ios_per_sec": 0, 00:40:54.505 "rw_mbytes_per_sec": 0, 00:40:54.505 "r_mbytes_per_sec": 0, 00:40:54.505 "w_mbytes_per_sec": 0 00:40:54.505 }, 00:40:54.505 "claimed": false, 00:40:54.505 "zoned": false, 00:40:54.505 "supported_io_types": { 00:40:54.505 "read": true, 00:40:54.505 "write": true, 00:40:54.505 "unmap": true, 00:40:54.505 "flush": true, 00:40:54.505 "reset": true, 00:40:54.505 "nvme_admin": true, 00:40:54.505 "nvme_io": true, 00:40:54.505 "nvme_io_md": false, 00:40:54.505 "write_zeroes": true, 00:40:54.505 "zcopy": false, 00:40:54.505 "get_zone_info": false, 00:40:54.505 "zone_management": false, 00:40:54.505 "zone_append": false, 00:40:54.505 "compare": true, 00:40:54.505 "compare_and_write": true, 00:40:54.505 "abort": true, 00:40:54.505 "seek_hole": false, 00:40:54.505 "seek_data": false, 00:40:54.505 "copy": true, 00:40:54.505 "nvme_iov_md": false 00:40:54.505 }, 00:40:54.505 "memory_domains": [ 00:40:54.505 { 00:40:54.505 "dma_device_id": "system", 00:40:54.505 "dma_device_type": 1 00:40:54.505 } 00:40:54.505 ], 00:40:54.505 "driver_specific": { 00:40:54.505 "nvme": [ 00:40:54.505 { 00:40:54.505 "trid": { 00:40:54.505 "trtype": "TCP", 00:40:54.505 "adrfam": "IPv4", 00:40:54.505 "traddr": "10.0.0.2", 00:40:54.505 "trsvcid": "4420", 00:40:54.505 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:54.505 }, 00:40:54.505 "ctrlr_data": { 
00:40:54.505 "cntlid": 1, 00:40:54.505 "vendor_id": "0x8086", 00:40:54.505 "model_number": "SPDK bdev Controller", 00:40:54.505 "serial_number": "SPDK0", 00:40:54.505 "firmware_revision": "25.01", 00:40:54.505 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:54.505 "oacs": { 00:40:54.505 "security": 0, 00:40:54.505 "format": 0, 00:40:54.505 "firmware": 0, 00:40:54.505 "ns_manage": 0 00:40:54.505 }, 00:40:54.505 "multi_ctrlr": true, 00:40:54.505 "ana_reporting": false 00:40:54.505 }, 00:40:54.505 "vs": { 00:40:54.505 "nvme_version": "1.3" 00:40:54.505 }, 00:40:54.505 "ns_data": { 00:40:54.505 "id": 1, 00:40:54.505 "can_share": true 00:40:54.505 } 00:40:54.505 } 00:40:54.505 ], 00:40:54.505 "mp_policy": "active_passive" 00:40:54.505 } 00:40:54.505 } 00:40:54.505 ] 00:40:54.505 10:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:54.505 10:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=5495 00:40:54.505 10:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:54.505 Running I/O for 10 seconds... 00:40:55.880 Latency(us) 00:40:55.880 [2024-12-13T09:42:49.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:55.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:55.880 Nvme0n1 : 1.00 20066.00 78.38 0.00 0.00 0.00 0.00 0.00 00:40:55.880 [2024-12-13T09:42:49.771Z] =================================================================================================================== 00:40:55.880 [2024-12-13T09:42:49.771Z] Total : 20066.00 78.38 0.00 0.00 0.00 0.00 0.00 00:40:55.880 00:40:56.447 10:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dc06ad4a-7c9a-47e9-b631-31d6418ad1a4 00:40:56.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:56.706 Nvme0n1 : 2.00 20129.50 78.63 0.00 0.00 0.00 0.00 0.00 00:40:56.706 [2024-12-13T09:42:50.597Z] =================================================================================================================== 00:40:56.706 [2024-12-13T09:42:50.597Z] Total : 20129.50 78.63 0.00 0.00 0.00 0.00 0.00 00:40:56.706 00:40:56.706 true 00:40:56.706 10:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc06ad4a-7c9a-47e9-b631-31d6418ad1a4 00:40:56.706 10:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:56.964 10:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:56.964 10:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:56.964 10:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 5495 00:40:57.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:57.530 Nvme0n1 : 3.00 
20193.00 78.88 0.00 0.00 0.00 0.00 0.00 00:40:57.530 [2024-12-13T09:42:51.421Z] =================================================================================================================== 00:40:57.530 [2024-12-13T09:42:51.421Z] Total : 20193.00 78.88 0.00 0.00 0.00 0.00 0.00 00:40:57.530 00:40:58.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:58.905 Nvme0n1 : 4.00 20304.25 79.31 0.00 0.00 0.00 0.00 0.00 00:40:58.905 [2024-12-13T09:42:52.796Z] =================================================================================================================== 00:40:58.905 [2024-12-13T09:42:52.796Z] Total : 20304.25 79.31 0.00 0.00 0.00 0.00 0.00 00:40:58.905 00:40:59.840 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:59.840 Nvme0n1 : 5.00 20383.60 79.62 0.00 0.00 0.00 0.00 0.00 00:40:59.840 [2024-12-13T09:42:53.731Z] =================================================================================================================== 00:40:59.840 [2024-12-13T09:42:53.731Z] Total : 20383.60 79.62 0.00 0.00 0.00 0.00 0.00 00:40:59.840 00:41:00.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:00.775 Nvme0n1 : 6.00 20415.33 79.75 0.00 0.00 0.00 0.00 0.00 00:41:00.775 [2024-12-13T09:42:54.666Z] =================================================================================================================== 00:41:00.775 [2024-12-13T09:42:54.666Z] Total : 20415.33 79.75 0.00 0.00 0.00 0.00 0.00 00:41:00.775 00:41:01.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:01.710 Nvme0n1 : 7.00 20456.14 79.91 0.00 0.00 0.00 0.00 0.00 00:41:01.710 [2024-12-13T09:42:55.601Z] =================================================================================================================== 00:41:01.710 [2024-12-13T09:42:55.601Z] Total : 20456.14 79.91 0.00 0.00 0.00 0.00 0.00 00:41:01.710 00:41:02.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:02.645 Nvme0n1 : 8.00 20486.75 80.03 0.00 0.00 0.00 0.00 0.00 00:41:02.645 [2024-12-13T09:42:56.536Z] =================================================================================================================== 00:41:02.645 [2024-12-13T09:42:56.536Z] Total : 20486.75 80.03 0.00 0.00 0.00 0.00 0.00 00:41:02.645 00:41:03.579 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:03.579 Nvme0n1 : 9.00 20510.56 80.12 0.00 0.00 0.00 0.00 0.00 00:41:03.579 [2024-12-13T09:42:57.470Z] =================================================================================================================== 00:41:03.579 [2024-12-13T09:42:57.470Z] Total : 20510.56 80.12 0.00 0.00 0.00 0.00 0.00 00:41:03.579 00:41:04.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:04.955 Nvme0n1 : 10.00 20491.50 80.04 0.00 0.00 0.00 0.00 0.00 00:41:04.955 [2024-12-13T09:42:58.846Z] =================================================================================================================== 00:41:04.955 [2024-12-13T09:42:58.846Z] Total : 20491.50 80.04 0.00 0.00 0.00 0.00 0.00 00:41:04.955 00:41:04.955 00:41:04.955 Latency(us) 00:41:04.955 [2024-12-13T09:42:58.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:04.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:04.955 Nvme0n1 : 10.00 20497.90 80.07 0.00 0.00 6241.37 3994.58 19223.89 00:41:04.955 
[2024-12-13T09:42:58.846Z] =================================================================================================================== 00:41:04.955 [2024-12-13T09:42:58.846Z] Total : 20497.90 80.07 0.00 0.00 6241.37 3994.58 19223.89 00:41:04.955 { 00:41:04.955 "results": [ 00:41:04.955 { 00:41:04.955 "job": "Nvme0n1", 00:41:04.955 "core_mask": "0x2", 00:41:04.955 "workload": "randwrite", 00:41:04.955 "status": "finished", 00:41:04.955 "queue_depth": 128, 00:41:04.955 "io_size": 4096, 00:41:04.955 "runtime": 10.003124, 00:41:04.955 "iops": 20497.896457146788, 00:41:04.955 "mibps": 80.06990803572964, 00:41:04.955 "io_failed": 0, 00:41:04.955 "io_timeout": 0, 00:41:04.955 "avg_latency_us": 6241.373511665265, 00:41:04.955 "min_latency_us": 3994.575238095238, 00:41:04.955 "max_latency_us": 19223.893333333333 00:41:04.955 } 00:41:04.955 ], 00:41:04.955 "core_count": 1 00:41:04.955 } 00:41:04.956 10:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 5266 00:41:04.956 10:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 5266 ']' 00:41:04.956 10:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 5266 00:41:04.956 10:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:41:04.956 10:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:04.956 10:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 5266 00:41:04.956 10:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:04.956 10:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:04.956 10:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 5266' 00:41:04.956 killing process with pid 5266 00:41:04.956 10:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 5266 00:41:04.956 Received shutdown signal, test time was about 10.000000 seconds 00:41:04.956 00:41:04.956 Latency(us) 00:41:04.956 [2024-12-13T09:42:58.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:04.956 [2024-12-13T09:42:58.847Z] =================================================================================================================== 00:41:04.956 [2024-12-13T09:42:58.847Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:04.956 10:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 5266 00:41:05.522 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:05.784 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
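The per-run summary bdevperf prints is also emitted as the JSON block shown above; a small sketch of pulling the headline numbers out of it, assuming the block has been captured to perf.json (the test itself consumes it inline):
jq -r '.results[0] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us (min \(.min_latency_us), max \(.max_latency_us))"' perf.json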
00:41:06.042 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc06ad4a-7c9a-47e9-b631-31d6418ad1a4 00:41:06.042 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:41:06.300 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:41:06.300 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:41:06.300 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1850 00:41:06.300 10:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1850 00:41:06.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1850 Killed "${NVMF_APP[@]}" "$@" 00:41:06.300 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:41:06.300 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:41:06.300 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:06.300 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:06.300 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:06.300 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=7298 00:41:06.300 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 7298 00:41:06.300 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:41:06.300 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 7298 ']' 00:41:06.300 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:06.300 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:06.300 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:06.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:06.300 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:06.300 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:06.300 [2024-12-13 10:43:00.109170] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
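A minimal sketch of the "dirty" hand-off happening here, assuming $nvmfpid and $AIO hold the PID and backing-file path shown in the trace: the target is SIGKILLed so the lvstore is never cleanly closed, a new nvmf_tgt is started (interrupt mode, core mask 0x1, as in the command line above), and the bdev_aio_create that follows below triggers blobstore recovery from on-disk metadata:
kill -9 "$nvmfpid"                                               # no clean lvstore shutdown, on purpose
wait "$nvmfpid"
./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &    # flags as captured in the trace
# once the RPC socket is up:
./scripts/rpc.py bdev_aio_create "$AIO" aio_bdev 4096            # replays the lvstore ("Performing recovery on blobstore")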
00:41:06.300 [2024-12-13 10:43:00.111204] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:41:06.300 [2024-12-13 10:43:00.111272] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:06.559 [2024-12-13 10:43:00.233596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:06.559 [2024-12-13 10:43:00.343656] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:06.559 [2024-12-13 10:43:00.343701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:06.559 [2024-12-13 10:43:00.343714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:06.559 [2024-12-13 10:43:00.343723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:06.559 [2024-12-13 10:43:00.343732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:06.559 [2024-12-13 10:43:00.345196] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:06.818 [2024-12-13 10:43:00.637037] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:06.818 [2024-12-13 10:43:00.637277] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:07.076 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:07.076 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:41:07.076 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:07.076 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:07.076 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:07.076 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:07.076 10:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:07.334 [2024-12-13 10:43:01.121300] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:41:07.334 [2024-12-13 10:43:01.121506] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:41:07.334 [2024-12-13 10:43:01.121567] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:41:07.334 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:41:07.334 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c339107e-406b-497f-99d2-5f39e1e99ea5 00:41:07.334 10:43:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c339107e-406b-497f-99d2-5f39e1e99ea5 00:41:07.334 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:07.334 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:41:07.334 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:07.334 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:07.334 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:07.593 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c339107e-406b-497f-99d2-5f39e1e99ea5 -t 2000 00:41:07.851 [ 00:41:07.851 { 00:41:07.851 "name": "c339107e-406b-497f-99d2-5f39e1e99ea5", 00:41:07.851 "aliases": [ 00:41:07.851 "lvs/lvol" 00:41:07.851 ], 00:41:07.851 "product_name": "Logical Volume", 00:41:07.851 "block_size": 4096, 00:41:07.851 "num_blocks": 38912, 00:41:07.851 "uuid": "c339107e-406b-497f-99d2-5f39e1e99ea5", 00:41:07.851 "assigned_rate_limits": { 00:41:07.851 "rw_ios_per_sec": 0, 00:41:07.851 "rw_mbytes_per_sec": 0, 00:41:07.851 "r_mbytes_per_sec": 0, 00:41:07.851 "w_mbytes_per_sec": 0 00:41:07.851 }, 00:41:07.851 "claimed": false, 00:41:07.851 "zoned": false, 00:41:07.851 "supported_io_types": { 00:41:07.851 "read": true, 00:41:07.851 "write": true, 00:41:07.851 "unmap": true, 00:41:07.851 "flush": false, 00:41:07.851 "reset": true, 00:41:07.851 "nvme_admin": false, 00:41:07.851 "nvme_io": false, 00:41:07.851 "nvme_io_md": false, 00:41:07.851 "write_zeroes": true, 00:41:07.851 "zcopy": false, 00:41:07.851 "get_zone_info": false, 00:41:07.851 "zone_management": false, 00:41:07.851 "zone_append": false, 00:41:07.851 "compare": false, 00:41:07.851 "compare_and_write": false, 00:41:07.851 "abort": false, 00:41:07.851 "seek_hole": true, 00:41:07.851 "seek_data": true, 00:41:07.851 "copy": false, 00:41:07.851 "nvme_iov_md": false 00:41:07.851 }, 00:41:07.851 "driver_specific": { 00:41:07.851 "lvol": { 00:41:07.851 "lvol_store_uuid": "dc06ad4a-7c9a-47e9-b631-31d6418ad1a4", 00:41:07.851 "base_bdev": "aio_bdev", 00:41:07.851 "thin_provision": false, 00:41:07.851 "num_allocated_clusters": 38, 00:41:07.851 "snapshot": false, 00:41:07.851 "clone": false, 00:41:07.851 "esnap_clone": false 00:41:07.851 } 00:41:07.851 } 00:41:07.851 } 00:41:07.851 ] 00:41:07.851 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:41:07.851 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc06ad4a-7c9a-47e9-b631-31d6418ad1a4 00:41:07.851 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:41:07.851 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:41:07.851 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc06ad4a-7c9a-47e9-b631-31d6418ad1a4 00:41:07.851 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:41:08.110 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:41:08.110 10:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:08.368 [2024-12-13 10:43:02.081864] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:41:08.368 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc06ad4a-7c9a-47e9-b631-31d6418ad1a4 00:41:08.368 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:41:08.368 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc06ad4a-7c9a-47e9-b631-31d6418ad1a4 00:41:08.368 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:08.368 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:08.368 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:08.368 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:08.368 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:08.368 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:08.368 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:08.368 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:41:08.369 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc06ad4a-7c9a-47e9-b631-31d6418ad1a4 00:41:08.627 request: 00:41:08.627 { 00:41:08.627 "uuid": "dc06ad4a-7c9a-47e9-b631-31d6418ad1a4", 00:41:08.627 "method": "bdev_lvol_get_lvstores", 
00:41:08.627 "req_id": 1 00:41:08.627 } 00:41:08.627 Got JSON-RPC error response 00:41:08.627 response: 00:41:08.627 { 00:41:08.627 "code": -19, 00:41:08.627 "message": "No such device" 00:41:08.627 } 00:41:08.627 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:41:08.627 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:08.627 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:08.627 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:08.627 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:08.627 aio_bdev 00:41:08.627 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c339107e-406b-497f-99d2-5f39e1e99ea5 00:41:08.627 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c339107e-406b-497f-99d2-5f39e1e99ea5 00:41:08.627 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:08.627 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:41:08.627 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:08.627 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:08.627 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:08.886 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c339107e-406b-497f-99d2-5f39e1e99ea5 -t 2000 00:41:09.144 [ 00:41:09.144 { 00:41:09.144 "name": "c339107e-406b-497f-99d2-5f39e1e99ea5", 00:41:09.144 "aliases": [ 00:41:09.144 "lvs/lvol" 00:41:09.144 ], 00:41:09.144 "product_name": "Logical Volume", 00:41:09.144 "block_size": 4096, 00:41:09.144 "num_blocks": 38912, 00:41:09.144 "uuid": "c339107e-406b-497f-99d2-5f39e1e99ea5", 00:41:09.144 "assigned_rate_limits": { 00:41:09.144 "rw_ios_per_sec": 0, 00:41:09.144 "rw_mbytes_per_sec": 0, 00:41:09.144 "r_mbytes_per_sec": 0, 00:41:09.144 "w_mbytes_per_sec": 0 00:41:09.144 }, 00:41:09.144 "claimed": false, 00:41:09.144 "zoned": false, 00:41:09.144 "supported_io_types": { 00:41:09.144 "read": true, 00:41:09.144 "write": true, 00:41:09.144 "unmap": true, 00:41:09.144 "flush": false, 00:41:09.144 "reset": true, 00:41:09.144 "nvme_admin": false, 00:41:09.144 "nvme_io": false, 00:41:09.144 "nvme_io_md": false, 00:41:09.144 "write_zeroes": true, 00:41:09.144 "zcopy": false, 00:41:09.144 "get_zone_info": false, 00:41:09.144 "zone_management": false, 00:41:09.144 "zone_append": false, 00:41:09.144 
"compare": false, 00:41:09.144 "compare_and_write": false, 00:41:09.144 "abort": false, 00:41:09.144 "seek_hole": true, 00:41:09.144 "seek_data": true, 00:41:09.144 "copy": false, 00:41:09.144 "nvme_iov_md": false 00:41:09.144 }, 00:41:09.144 "driver_specific": { 00:41:09.144 "lvol": { 00:41:09.144 "lvol_store_uuid": "dc06ad4a-7c9a-47e9-b631-31d6418ad1a4", 00:41:09.144 "base_bdev": "aio_bdev", 00:41:09.144 "thin_provision": false, 00:41:09.144 "num_allocated_clusters": 38, 00:41:09.144 "snapshot": false, 00:41:09.144 "clone": false, 00:41:09.144 "esnap_clone": false 00:41:09.144 } 00:41:09.144 } 00:41:09.144 } 00:41:09.144 ] 00:41:09.144 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:41:09.144 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc06ad4a-7c9a-47e9-b631-31d6418ad1a4 00:41:09.144 10:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:41:09.403 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:41:09.403 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dc06ad4a-7c9a-47e9-b631-31d6418ad1a4 00:41:09.403 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:41:09.403 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:41:09.403 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c339107e-406b-497f-99d2-5f39e1e99ea5 00:41:09.661 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dc06ad4a-7c9a-47e9-b631-31d6418ad1a4 00:41:09.920 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:10.178 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:10.178 00:41:10.178 real 0m18.983s 00:41:10.178 user 0m36.342s 00:41:10.178 sys 0m3.923s 00:41:10.178 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:10.178 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:10.178 ************************************ 00:41:10.178 END TEST lvs_grow_dirty 00:41:10.178 ************************************ 00:41:10.178 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:41:10.178 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@812 -- # type=--id 00:41:10.178 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:41:10.178 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:41:10.178 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:41:10.178 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:41:10.178 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:41:10.178 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:41:10.178 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:41:10.178 nvmf_trace.0 00:41:10.178 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:41:10.178 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:41:10.178 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:10.178 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:41:10.178 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:10.178 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:41:10.178 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:10.178 10:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:10.178 rmmod nvme_tcp 00:41:10.179 rmmod nvme_fabrics 00:41:10.179 rmmod nvme_keyring 00:41:10.179 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:10.179 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:41:10.179 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:41:10.179 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 7298 ']' 00:41:10.179 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 7298 00:41:10.179 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 7298 ']' 00:41:10.179 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 7298 00:41:10.179 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:41:10.179 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:10.179 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 7298 00:41:10.179 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:41:10.179 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:10.179 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 7298' 00:41:10.179 killing process with pid 7298 00:41:10.179 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 7298 00:41:10.179 10:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 7298 00:41:11.554 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:11.554 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:11.554 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:11.554 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:41:11.554 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:41:11.554 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:11.554 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:41:11.554 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:11.554 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:11.554 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:11.554 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:11.554 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:13.457 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:13.457 00:41:13.457 real 0m46.124s 00:41:13.457 user 0m56.625s 00:41:13.457 sys 0m10.045s 00:41:13.457 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:13.457 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:13.457 ************************************ 00:41:13.457 END TEST nvmf_lvs_grow 00:41:13.457 ************************************ 00:41:13.457 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:41:13.457 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:13.457 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:13.457 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:13.457 ************************************ 00:41:13.457 START TEST nvmf_bdev_io_wait 00:41:13.457 ************************************ 00:41:13.457 10:43:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:41:13.717 * Looking for test storage... 00:41:13.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:13.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:13.717 --rc genhtml_branch_coverage=1 00:41:13.717 --rc genhtml_function_coverage=1 00:41:13.717 --rc genhtml_legend=1 00:41:13.717 --rc geninfo_all_blocks=1 00:41:13.717 --rc geninfo_unexecuted_blocks=1 00:41:13.717 00:41:13.717 ' 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:13.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:13.717 --rc genhtml_branch_coverage=1 00:41:13.717 --rc genhtml_function_coverage=1 00:41:13.717 --rc genhtml_legend=1 00:41:13.717 --rc geninfo_all_blocks=1 00:41:13.717 --rc geninfo_unexecuted_blocks=1 00:41:13.717 00:41:13.717 ' 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:13.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:13.717 --rc genhtml_branch_coverage=1 00:41:13.717 --rc genhtml_function_coverage=1 00:41:13.717 --rc genhtml_legend=1 00:41:13.717 --rc geninfo_all_blocks=1 00:41:13.717 --rc geninfo_unexecuted_blocks=1 00:41:13.717 00:41:13.717 ' 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:13.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:13.717 --rc genhtml_branch_coverage=1 00:41:13.717 --rc genhtml_function_coverage=1 00:41:13.717 --rc genhtml_legend=1 00:41:13.717 --rc geninfo_all_blocks=1 00:41:13.717 --rc 
geninfo_unexecuted_blocks=1 00:41:13.717 00:41:13.717 ' 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:13.717 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:41:13.718 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
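Condensed sketch of the NIC-selection step that the device-ID arrays above feed into (pci_bus_cache is populated earlier by the same nvmf/common.sh; interface names are the ones this run resolves to next):

    intel=0x8086
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})  # E810 device IDs
    pci_devs=("${e810[@]}")        # the e810 == e810 check above keeps only the E810 ports
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # resolves to cvl_0_0 / cvl_0_1 below
        net_devs+=("${pci_net_devs[@]##*/}")
    done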
00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:19.153 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:19.153 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:19.153 Found net devices under 0000:af:00.0: cvl_0_0 00:41:19.153 
10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:19.153 Found net devices under 0000:af:00.1: cvl_0_1 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:19.153 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:19.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:19.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:41:19.154 00:41:19.154 --- 10.0.0.2 ping statistics --- 00:41:19.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:19.154 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:19.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:19.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:41:19.154 00:41:19.154 --- 10.0.0.1 ping statistics --- 00:41:19.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:19.154 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=11489 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 11489 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 11489 ']' 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:19.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
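The namespace and address setup that produced the two ping checks above boils down to the following sequence (a condensed sketch; interface names, addresses and the iptables rule are the ones printed in this log):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                          # root netns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target netns -> initiator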
00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:19.154 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:19.154 [2024-12-13 10:43:13.013199] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:19.154 [2024-12-13 10:43:13.015257] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:41:19.154 [2024-12-13 10:43:13.015323] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:19.413 [2024-12-13 10:43:13.133354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:19.413 [2024-12-13 10:43:13.246350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:19.413 [2024-12-13 10:43:13.246402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:19.413 [2024-12-13 10:43:13.246417] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:19.413 [2024-12-13 10:43:13.246426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:19.413 [2024-12-13 10:43:13.246438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:19.413 [2024-12-13 10:43:13.248834] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:19.413 [2024-12-13 10:43:13.248856] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:41:19.413 [2024-12-13 10:43:13.248945] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:19.413 [2024-12-13 10:43:13.248955] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:41:19.413 [2024-12-13 10:43:13.249405] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
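For reference, the target whose DPDK/reactor start-up is logged above was launched inside that namespace roughly as follows (sketch; pid and paths are the ones shown above, and waitforlisten is the autotest_common.sh helper used by this harness):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    nvmfpid=$!                 # 11489 in this run
    waitforlisten "$nvmfpid"   # block until the target's RPC socket is accepting commands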
00:41:19.981 10:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:19.981 10:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:41:19.981 10:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:19.981 10:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:19.981 10:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:20.239 10:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:20.239 10:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:41:20.239 10:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.239 10:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:20.239 10:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.239 10:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:41:20.239 10:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.239 10:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:20.239 [2024-12-13 10:43:14.102185] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:20.239 [2024-12-13 10:43:14.102903] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:20.239 [2024-12-13 10:43:14.103947] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:20.239 [2024-12-13 10:43:14.104768] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
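The two rpc_cmd calls above map to plain rpc.py invocations; a sketch using the script path seen elsewhere in this run (the deliberately tiny pool/cache values are what make the bdev_io_wait test exercise the out-of-bdev_io retry path):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_set_options -p 5 -c 1   # bdev_io pool of 5, cache of 1: submissions quickly run dry
    $RPC framework_start_init         # finish the initialization that --wait-for-rpc deferred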
00:41:20.239 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.239 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:20.239 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.239 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:20.239 [2024-12-13 10:43:14.113907] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:20.239 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.239 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:20.239 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.239 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:20.498 Malloc0 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:20.498 [2024-12-13 10:43:14.249897] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=11729 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=11732 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:20.498 { 00:41:20.498 "params": { 00:41:20.498 "name": "Nvme$subsystem", 00:41:20.498 "trtype": "$TEST_TRANSPORT", 00:41:20.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:20.498 "adrfam": "ipv4", 00:41:20.498 "trsvcid": "$NVMF_PORT", 00:41:20.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:20.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:20.498 "hdgst": ${hdgst:-false}, 00:41:20.498 "ddgst": ${ddgst:-false} 00:41:20.498 }, 00:41:20.498 "method": "bdev_nvme_attach_controller" 00:41:20.498 } 00:41:20.498 EOF 00:41:20.498 )") 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=11734 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:20.498 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:20.499 { 00:41:20.499 "params": { 00:41:20.499 "name": "Nvme$subsystem", 00:41:20.499 "trtype": "$TEST_TRANSPORT", 00:41:20.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:20.499 "adrfam": "ipv4", 00:41:20.499 "trsvcid": "$NVMF_PORT", 00:41:20.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:20.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:20.499 "hdgst": ${hdgst:-false}, 00:41:20.499 "ddgst": ${ddgst:-false} 00:41:20.499 }, 00:41:20.499 "method": "bdev_nvme_attach_controller" 00:41:20.499 } 00:41:20.499 EOF 00:41:20.499 )") 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=11737 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:20.499 { 00:41:20.499 "params": { 00:41:20.499 "name": "Nvme$subsystem", 00:41:20.499 "trtype": "$TEST_TRANSPORT", 00:41:20.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:20.499 "adrfam": "ipv4", 00:41:20.499 "trsvcid": "$NVMF_PORT", 00:41:20.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:20.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:20.499 "hdgst": ${hdgst:-false}, 00:41:20.499 "ddgst": ${ddgst:-false} 00:41:20.499 }, 00:41:20.499 "method": "bdev_nvme_attach_controller" 00:41:20.499 } 00:41:20.499 EOF 00:41:20.499 )") 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:20.499 { 00:41:20.499 "params": { 00:41:20.499 "name": "Nvme$subsystem", 00:41:20.499 "trtype": "$TEST_TRANSPORT", 00:41:20.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:20.499 "adrfam": "ipv4", 00:41:20.499 "trsvcid": "$NVMF_PORT", 00:41:20.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:20.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:20.499 "hdgst": ${hdgst:-false}, 00:41:20.499 "ddgst": ${ddgst:-false} 00:41:20.499 }, 00:41:20.499 "method": "bdev_nvme_attach_controller" 00:41:20.499 } 00:41:20.499 EOF 00:41:20.499 )") 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 11729 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
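Taken together, the target-side rpc_cmd calls above amount to this provisioning sequence (sketch; same rpc.py path, NQN, serial and listener address as in the log):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192       # TCP transport, 8192-byte IO unit size
    $RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB / 512-byte-block backing bdev
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420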
00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:20.499 "params": { 00:41:20.499 "name": "Nvme1", 00:41:20.499 "trtype": "tcp", 00:41:20.499 "traddr": "10.0.0.2", 00:41:20.499 "adrfam": "ipv4", 00:41:20.499 "trsvcid": "4420", 00:41:20.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:20.499 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:20.499 "hdgst": false, 00:41:20.499 "ddgst": false 00:41:20.499 }, 00:41:20.499 "method": "bdev_nvme_attach_controller" 00:41:20.499 }' 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:20.499 "params": { 00:41:20.499 "name": "Nvme1", 00:41:20.499 "trtype": "tcp", 00:41:20.499 "traddr": "10.0.0.2", 00:41:20.499 "adrfam": "ipv4", 00:41:20.499 "trsvcid": "4420", 00:41:20.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:20.499 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:20.499 "hdgst": false, 00:41:20.499 "ddgst": false 00:41:20.499 }, 00:41:20.499 "method": "bdev_nvme_attach_controller" 00:41:20.499 }' 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:20.499 "params": { 00:41:20.499 "name": "Nvme1", 00:41:20.499 "trtype": "tcp", 00:41:20.499 "traddr": "10.0.0.2", 00:41:20.499 "adrfam": "ipv4", 00:41:20.499 "trsvcid": "4420", 00:41:20.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:20.499 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:20.499 "hdgst": false, 00:41:20.499 "ddgst": false 00:41:20.499 }, 00:41:20.499 "method": "bdev_nvme_attach_controller" 00:41:20.499 }' 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:20.499 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:20.499 "params": { 00:41:20.499 "name": "Nvme1", 00:41:20.499 "trtype": "tcp", 00:41:20.499 "traddr": "10.0.0.2", 00:41:20.499 "adrfam": "ipv4", 00:41:20.499 "trsvcid": "4420", 00:41:20.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:20.499 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:20.499 "hdgst": false, 00:41:20.499 "ddgst": false 00:41:20.499 }, 00:41:20.499 "method": "bdev_nvme_attach_controller" 00:41:20.499 }' 00:41:20.499 [2024-12-13 10:43:14.327089] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:41:20.499 [2024-12-13 10:43:14.327184] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:41:20.499 [2024-12-13 10:43:14.327222] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:41:20.499 [2024-12-13 10:43:14.327300] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:41:20.499 [2024-12-13 10:43:14.329756] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:41:20.499 [2024-12-13 10:43:14.329828] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:41:20.499 [2024-12-13 10:43:14.330673] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:41:20.499 [2024-12-13 10:43:14.330745] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:41:20.758 [2024-12-13 10:43:14.552987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:21.017 [2024-12-13 10:43:14.653013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:21.017 [2024-12-13 10:43:14.697434] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:41:21.017 [2024-12-13 10:43:14.754334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:21.017 [2024-12-13 10:43:14.763812] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:41:21.017 [2024-12-13 10:43:14.854718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:21.017 [2024-12-13 10:43:14.860604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:41:21.275 [2024-12-13 10:43:14.982134] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:41:21.533 Running I/O for 1 seconds... 00:41:21.533 Running I/O for 1 seconds... 00:41:21.533 Running I/O for 1 seconds... 00:41:21.533 Running I/O for 1 seconds... 
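The four "Running I/O" lines above come from four bdevperf instances, one per workload, each handed the same bdev_nvme_attach_controller JSON printed earlier via process substitution; condensed (the real script records WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID and waits on them individually):

    for spec in "0x10 1 write" "0x20 2 read" "0x40 3 flush" "0x80 4 unmap"; do
        read -r mask id workload <<< "$spec"
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
            -m "$mask" -i "$id" --json <(gen_nvmf_target_json) \
            -q 128 -o 4096 -w "$workload" -t 1 -s 256 &
    done
    wait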
00:41:22.471 213904.00 IOPS, 835.56 MiB/s 00:41:22.471 Latency(us) 00:41:22.471 [2024-12-13T09:43:16.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:22.471 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:41:22.471 Nvme1n1 : 1.00 213557.07 834.21 0.00 0.00 596.35 265.26 1614.99 00:41:22.471 [2024-12-13T09:43:16.362Z] =================================================================================================================== 00:41:22.471 [2024-12-13T09:43:16.362Z] Total : 213557.07 834.21 0.00 0.00 596.35 265.26 1614.99 00:41:22.471 7299.00 IOPS, 28.51 MiB/s [2024-12-13T09:43:16.362Z] 10428.00 IOPS, 40.73 MiB/s 00:41:22.471 Latency(us) 00:41:22.471 [2024-12-13T09:43:16.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:22.471 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:41:22.471 Nvme1n1 : 1.02 7300.17 28.52 0.00 0.00 17334.31 3214.38 31332.45 00:41:22.471 [2024-12-13T09:43:16.362Z] =================================================================================================================== 00:41:22.471 [2024-12-13T09:43:16.362Z] Total : 7300.17 28.52 0.00 0.00 17334.31 3214.38 31332.45 00:41:22.471 00:41:22.471 Latency(us) 00:41:22.471 [2024-12-13T09:43:16.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:22.471 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:41:22.471 Nvme1n1 : 1.05 10052.46 39.27 0.00 0.00 12176.85 6959.30 49183.21 00:41:22.471 [2024-12-13T09:43:16.362Z] =================================================================================================================== 00:41:22.471 [2024-12-13T09:43:16.362Z] Total : 10052.46 39.27 0.00 0.00 12176.85 6959.30 49183.21 00:41:22.730 7587.00 IOPS, 29.64 MiB/s 00:41:22.730 Latency(us) 00:41:22.730 [2024-12-13T09:43:16.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:22.730 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:41:22.730 Nvme1n1 : 1.01 7710.19 30.12 0.00 0.00 16560.32 3698.10 35202.19 00:41:22.730 [2024-12-13T09:43:16.621Z] =================================================================================================================== 00:41:22.730 [2024-12-13T09:43:16.621Z] Total : 7710.19 30.12 0.00 0.00 16560.32 3698.10 35202.19 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 11732 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 11734 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 11737 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:41:23.297 10:43:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:23.297 rmmod nvme_tcp 00:41:23.297 rmmod nvme_fabrics 00:41:23.297 rmmod nvme_keyring 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 11489 ']' 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 11489 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 11489 ']' 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 11489 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:23.297 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 11489 00:41:23.556 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:23.556 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:23.556 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 11489' 00:41:23.556 killing process with pid 11489 00:41:23.556 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 11489 00:41:23.556 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 11489 00:41:24.492 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:24.492 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:24.492 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:24.492 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:41:24.492 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@791 -- # iptables-save 00:41:24.492 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:24.492 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:41:24.492 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:24.492 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:24.492 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:24.492 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:24.492 10:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:26.395 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:26.654 00:41:26.654 real 0m12.982s 00:41:26.654 user 0m23.167s 00:41:26.654 sys 0m6.712s 00:41:26.654 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:26.654 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:26.654 ************************************ 00:41:26.654 END TEST nvmf_bdev_io_wait 00:41:26.654 ************************************ 00:41:26.654 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:41:26.654 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:26.654 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:26.654 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:26.654 ************************************ 00:41:26.654 START TEST nvmf_queue_depth 00:41:26.654 ************************************ 00:41:26.654 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:41:26.654 * Looking for test storage... 
00:41:26.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:26.654 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:26.654 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:41:26.654 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:26.654 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:26.654 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:26.654 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:26.654 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:26.654 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:41:26.654 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:41:26.654 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:41:26.654 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:41:26.654 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:26.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:26.655 --rc genhtml_branch_coverage=1 00:41:26.655 --rc genhtml_function_coverage=1 00:41:26.655 --rc genhtml_legend=1 00:41:26.655 --rc geninfo_all_blocks=1 00:41:26.655 --rc geninfo_unexecuted_blocks=1 00:41:26.655 00:41:26.655 ' 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:26.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:26.655 --rc genhtml_branch_coverage=1 00:41:26.655 --rc genhtml_function_coverage=1 00:41:26.655 --rc genhtml_legend=1 00:41:26.655 --rc geninfo_all_blocks=1 00:41:26.655 --rc geninfo_unexecuted_blocks=1 00:41:26.655 00:41:26.655 ' 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:26.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:26.655 --rc genhtml_branch_coverage=1 00:41:26.655 --rc genhtml_function_coverage=1 00:41:26.655 --rc genhtml_legend=1 00:41:26.655 --rc geninfo_all_blocks=1 00:41:26.655 --rc geninfo_unexecuted_blocks=1 00:41:26.655 00:41:26.655 ' 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:26.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:26.655 --rc genhtml_branch_coverage=1 00:41:26.655 --rc genhtml_function_coverage=1 00:41:26.655 --rc genhtml_legend=1 00:41:26.655 --rc geninfo_all_blocks=1 00:41:26.655 --rc 
geninfo_unexecuted_blocks=1 00:41:26.655 00:41:26.655 ' 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:26.655 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:26.914 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:26.914 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:26.914 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:26.914 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:41:26.914 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:41:26.914 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:41:26.914 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:41:26.914 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:26.914 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:26.914 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:26.914 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:26.914 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:26.914 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:26.914 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:26.914 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:26.914 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:26.914 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:26.914 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:41:26.914 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:32.189 10:43:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:32.189 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:32.189 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:32.189 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:41:32.190 Found net devices under 0000:af:00.0: cvl_0_0 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:32.190 Found net devices under 0000:af:00.1: cvl_0_1 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:32.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:32.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:41:32.190 00:41:32.190 --- 10.0.0.2 ping statistics --- 00:41:32.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:32.190 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:32.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:32.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:41:32.190 00:41:32.190 --- 10.0.0.1 ping statistics --- 00:41:32.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:32.190 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=15678 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 15678 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 15678 ']' 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:32.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:32.190 10:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:32.190 [2024-12-13 10:43:26.009294] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:32.190 [2024-12-13 10:43:26.011371] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:41:32.190 [2024-12-13 10:43:26.011438] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:32.449 [2024-12-13 10:43:26.129567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:32.450 [2024-12-13 10:43:26.236225] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:32.450 [2024-12-13 10:43:26.236264] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:32.450 [2024-12-13 10:43:26.236276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:32.450 [2024-12-13 10:43:26.236285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:32.450 [2024-12-13 10:43:26.236298] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:32.450 [2024-12-13 10:43:26.237700] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:32.708 [2024-12-13 10:43:26.553793] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:32.708 [2024-12-13 10:43:26.554052] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
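Interrupt mode for the target is visible in the notices above: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with --interrupt-mode and a one-core mask, and its reactor and spdk_threads report intr mode. A minimal stand-alone version of that launch, using a plain polling loop in place of the harness's waitforlisten helper (the loop and the rpc_get_methods probe are illustrative assumptions; the launch command itself mirrors the log):
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start the target inside the test namespace, single core, interrupt mode.
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!
  # Wait until the target's RPC socket answers before issuing configuration RPCs.
  until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done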
00:41:32.967 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:32.967 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:41:32.967 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:32.967 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:32.967 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:32.967 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:32.967 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:32.967 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.967 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:32.967 [2024-12-13 10:43:26.846409] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:32.967 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.967 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:32.967 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.967 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:33.227 Malloc0 00:41:33.227 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.227 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:33.228 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.228 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:33.228 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.228 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:33.228 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.228 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:33.228 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.228 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:33.228 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:41:33.228 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:33.228 [2024-12-13 10:43:26.958592] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:33.228 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.228 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=15916 00:41:33.228 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:33.228 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 15916 /var/tmp/bdevperf.sock 00:41:33.228 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 15916 ']' 00:41:33.228 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:33.228 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:33.228 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:41:33.228 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:33.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:33.228 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:33.228 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:33.228 [2024-12-13 10:43:27.033183] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
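The rpc_cmd calls above (create the TCP transport, a 64 MB Malloc bdev, the cnode1 subsystem, its namespace, and the 10.0.0.2:4420 listener) correspond to plain rpc.py invocations. A condensed sketch of that target-side setup, assuming the target's default RPC socket at /var/tmp/spdk.sock:
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
  # Target-side configuration, mirroring the rpc_cmd sequence traced in this test.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420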
00:41:33.228 [2024-12-13 10:43:27.033262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid15916 ] 00:41:33.487 [2024-12-13 10:43:27.146806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:33.487 [2024-12-13 10:43:27.256189] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:34.055 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:34.055 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:41:34.055 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:41:34.055 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:34.055 10:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:34.314 NVMe0n1 00:41:34.314 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:34.314 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:34.314 Running I/O for 10 seconds... 00:41:36.300 10261.00 IOPS, 40.08 MiB/s [2024-12-13T09:43:31.568Z] 10581.50 IOPS, 41.33 MiB/s [2024-12-13T09:43:32.505Z] 10581.33 IOPS, 41.33 MiB/s [2024-12-13T09:43:33.443Z] 10581.25 IOPS, 41.33 MiB/s [2024-12-13T09:43:34.386Z] 10648.00 IOPS, 41.59 MiB/s [2024-12-13T09:43:35.331Z] 10647.00 IOPS, 41.59 MiB/s [2024-12-13T09:43:36.271Z] 10680.43 IOPS, 41.72 MiB/s [2024-12-13T09:43:37.208Z] 10691.62 IOPS, 41.76 MiB/s [2024-12-13T09:43:38.586Z] 10701.11 IOPS, 41.80 MiB/s [2024-12-13T09:43:38.586Z] 10747.00 IOPS, 41.98 MiB/s 00:41:44.695 Latency(us) 00:41:44.695 [2024-12-13T09:43:38.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:44.696 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:41:44.696 Verification LBA range: start 0x0 length 0x4000 00:41:44.696 NVMe0n1 : 10.08 10763.59 42.05 0.00 0.00 94806.13 20846.69 63413.88 00:41:44.696 [2024-12-13T09:43:38.587Z] =================================================================================================================== 00:41:44.696 [2024-12-13T09:43:38.587Z] Total : 10763.59 42.05 0.00 0.00 94806.13 20846.69 63413.88 00:41:44.696 { 00:41:44.696 "results": [ 00:41:44.696 { 00:41:44.696 "job": "NVMe0n1", 00:41:44.696 "core_mask": "0x1", 00:41:44.696 "workload": "verify", 00:41:44.696 "status": "finished", 00:41:44.696 "verify_range": { 00:41:44.696 "start": 0, 00:41:44.696 "length": 16384 00:41:44.696 }, 00:41:44.696 "queue_depth": 1024, 00:41:44.696 "io_size": 4096, 00:41:44.696 "runtime": 10.07972, 00:41:44.696 "iops": 10763.59263947808, 00:41:44.696 "mibps": 42.04528374796125, 00:41:44.696 "io_failed": 0, 00:41:44.696 "io_timeout": 0, 00:41:44.696 "avg_latency_us": 94806.13140985633, 00:41:44.696 "min_latency_us": 20846.689523809524, 00:41:44.696 "max_latency_us": 63413.8819047619 00:41:44.696 } 
00:41:44.696 ], 00:41:44.696 "core_count": 1 00:41:44.696 } 00:41:44.696 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 15916 00:41:44.696 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 15916 ']' 00:41:44.696 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 15916 00:41:44.696 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:41:44.696 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:44.696 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 15916 00:41:44.696 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:44.696 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:44.696 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 15916' 00:41:44.696 killing process with pid 15916 00:41:44.696 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 15916 00:41:44.696 Received shutdown signal, test time was about 10.000000 seconds 00:41:44.696 00:41:44.696 Latency(us) 00:41:44.696 [2024-12-13T09:43:38.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:44.696 [2024-12-13T09:43:38.587Z] =================================================================================================================== 00:41:44.696 [2024-12-13T09:43:38.587Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:44.696 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 15916 00:41:45.635 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:41:45.635 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:41:45.635 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:45.635 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:41:45.635 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:45.635 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:41:45.635 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:45.635 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:45.635 rmmod nvme_tcp 00:41:45.635 rmmod nvme_fabrics 00:41:45.635 rmmod nvme_keyring 00:41:45.635 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:45.635 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:41:45.635 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:41:45.635 10:43:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 15678 ']' 00:41:45.635 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 15678 00:41:45.635 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 15678 ']' 00:41:45.635 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 15678 00:41:45.635 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:41:45.635 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:45.635 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 15678 00:41:45.635 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:45.635 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:45.635 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 15678' 00:41:45.635 killing process with pid 15678 00:41:45.635 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 15678 00:41:45.635 10:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 15678 00:41:47.014 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:47.014 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:47.014 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:47.014 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:41:47.014 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:41:47.014 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:47.014 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:41:47.014 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:47.014 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:47.014 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:47.014 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:47.014 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:48.919 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:48.919 00:41:48.919 real 0m22.361s 00:41:48.919 user 0m26.989s 00:41:48.919 sys 0m6.236s 00:41:48.919 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:41:48.919 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:48.919 ************************************ 00:41:48.919 END TEST nvmf_queue_depth 00:41:48.919 ************************************ 00:41:48.919 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:48.919 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:48.919 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:48.919 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:48.919 ************************************ 00:41:48.919 START TEST nvmf_target_multipath 00:41:48.919 ************************************ 00:41:48.919 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:49.180 * Looking for test storage... 00:41:49.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:41:49.180 10:43:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:49.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:49.180 --rc genhtml_branch_coverage=1 00:41:49.180 --rc genhtml_function_coverage=1 00:41:49.180 --rc genhtml_legend=1 00:41:49.180 --rc geninfo_all_blocks=1 00:41:49.180 --rc geninfo_unexecuted_blocks=1 00:41:49.180 00:41:49.180 ' 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:49.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:49.180 --rc genhtml_branch_coverage=1 00:41:49.180 --rc genhtml_function_coverage=1 00:41:49.180 --rc genhtml_legend=1 00:41:49.180 --rc geninfo_all_blocks=1 00:41:49.180 --rc geninfo_unexecuted_blocks=1 00:41:49.180 00:41:49.180 ' 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:49.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:49.180 --rc genhtml_branch_coverage=1 00:41:49.180 --rc genhtml_function_coverage=1 00:41:49.180 --rc genhtml_legend=1 00:41:49.180 --rc geninfo_all_blocks=1 00:41:49.180 --rc 
geninfo_unexecuted_blocks=1 00:41:49.180 00:41:49.180 ' 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:49.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:49.180 --rc genhtml_branch_coverage=1 00:41:49.180 --rc genhtml_function_coverage=1 00:41:49.180 --rc genhtml_legend=1 00:41:49.180 --rc geninfo_all_blocks=1 00:41:49.180 --rc geninfo_unexecuted_blocks=1 00:41:49.180 00:41:49.180 ' 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:49.180 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
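Note (not part of the recorded test output): the common.sh defaults sourced above (NVMF_PORT=4420, a host NQN generated by nvme gen-hostnqn, NVME_CONNECT='nvme connect') are what the initiator-side helpers use when attaching to a target. A minimal illustrative connect using those defaults plus the target address and subsystem NQN seen earlier in this run; assumes nvme-cli is installed and is only a sketch of the manual equivalent:

  # Illustration only: attach an initiator to the subsystem exercised earlier in this log.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  nvme list                                      # the attached namespace shows up as a new /dev/nvmeXnY
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1  # detach when done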
00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:49.181 10:43:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:41:49.181 10:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
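Note (illustrative, not a verbatim excerpt of this run): with NVMF_APP assembled above (-i <shm id> -e 0xFFFF --interrupt-mode) and the multipath.sh settings (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, nqn.2016-06.io.spdk:cnode1, rpc.py path), the test helpers bring up an interrupt-mode target and build the subsystem over RPC. A condensed sketch of that typical SPDK RPC sequence under those assumptions:

  # Sketch of the target-side setup the helpers drive (assumed flow, values taken from this log).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode &   # flags as assembled in NVMF_APP above
  $rpc nvmf_create_transport -t tcp -o                     # matches NVMF_TRANSPORT_OPTS='-t tcp -o' in this log
  $rpc bdev_malloc_create 64 512 -b Malloc0                # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE from multipath.sh
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420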
00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:54.459 10:43:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:54.459 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:54.459 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:54.459 10:43:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:54.459 Found net devices under 0000:af:00.0: cvl_0_0 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:54.459 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:54.460 Found net devices under 0000:af:00.1: cvl_0_1 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:54.460 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:54.719 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:54.720 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:54.720 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:54.720 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:54.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:54.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:41:54.979 00:41:54.979 --- 10.0.0.2 ping statistics --- 00:41:54.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:54.979 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:54.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:54.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:41:54.979 00:41:54.979 --- 10.0.0.1 ping statistics --- 00:41:54.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:54.979 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:41:54.979 only one NIC for nvmf test 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:54.979 rmmod nvme_tcp 00:41:54.979 rmmod nvme_fabrics 00:41:54.979 rmmod nvme_keyring 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:41:54.979 10:43:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:54.979 10:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:41:57.518 10:43:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:57.518 00:41:57.518 real 0m8.091s 00:41:57.518 user 0m1.753s 00:41:57.518 sys 0m4.248s 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:41:57.518 ************************************ 00:41:57.518 END TEST nvmf_target_multipath 00:41:57.518 ************************************ 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:57.518 ************************************ 00:41:57.518 START TEST nvmf_zcopy 00:41:57.518 ************************************ 00:41:57.518 10:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:41:57.518 * Looking for test storage... 
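Note (readable recap, not additional test output): the nvmftestfini / nvmf_tcp_fini path traced above tears the test topology back down. Summarized from the commands visible in this log; the _remove_spdk_ns helper's internals are not shown in this excerpt, so the netns deletion below is the assumed inverse of the 'ip netns add cvl_0_0_ns_spdk' done during setup:

  # Teardown steps performed by nvmftestfini in the trace above.
  modprobe -v -r nvme-tcp                                 # also drops nvme_fabrics / nvme_keyring (the rmmod lines above)
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # strip only the SPDK_NVMF-tagged ACCEPT rule added at setup
  ip netns delete cvl_0_0_ns_spdk                         # assumed: inverse of the setup's 'ip netns add'
  ip -4 addr flush cvl_0_1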
00:41:57.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:57.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:57.518 --rc genhtml_branch_coverage=1 00:41:57.518 --rc genhtml_function_coverage=1 00:41:57.518 --rc genhtml_legend=1 00:41:57.518 --rc geninfo_all_blocks=1 00:41:57.518 --rc geninfo_unexecuted_blocks=1 00:41:57.518 00:41:57.518 ' 00:41:57.518 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:57.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:57.518 --rc genhtml_branch_coverage=1 00:41:57.518 --rc genhtml_function_coverage=1 00:41:57.518 --rc genhtml_legend=1 00:41:57.518 --rc geninfo_all_blocks=1 00:41:57.519 --rc geninfo_unexecuted_blocks=1 00:41:57.519 00:41:57.519 ' 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:57.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:57.519 --rc genhtml_branch_coverage=1 00:41:57.519 --rc genhtml_function_coverage=1 00:41:57.519 --rc genhtml_legend=1 00:41:57.519 --rc geninfo_all_blocks=1 00:41:57.519 --rc geninfo_unexecuted_blocks=1 00:41:57.519 00:41:57.519 ' 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:57.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:57.519 --rc genhtml_branch_coverage=1 00:41:57.519 --rc genhtml_function_coverage=1 00:41:57.519 --rc genhtml_legend=1 00:41:57.519 --rc geninfo_all_blocks=1 00:41:57.519 --rc geninfo_unexecuted_blocks=1 00:41:57.519 00:41:57.519 ' 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:57.519 10:43:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:41:57.519 10:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:42:02.794 10:43:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:02.794 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:02.794 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:02.794 Found net devices under 0000:af:00.0: cvl_0_0 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:02.794 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:02.795 Found net devices under 0000:af:00.1: cvl_0_1 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:02.795 10:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:02.795 10:43:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:02.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:02.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:42:02.795 00:42:02.795 --- 10.0.0.2 ping statistics --- 00:42:02.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:02.795 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:02.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:02.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:42:02.795 00:42:02.795 --- 10.0.0.1 ping statistics --- 00:42:02.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:02.795 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=24620 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 24620 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@835 -- # '[' -z 24620 ']' 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:02.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:02.795 10:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:02.795 [2024-12-13 10:43:56.263793] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:02.795 [2024-12-13 10:43:56.265852] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:42:02.795 [2024-12-13 10:43:56.265922] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:02.795 [2024-12-13 10:43:56.384134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:02.795 [2024-12-13 10:43:56.490952] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:02.795 [2024-12-13 10:43:56.490994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:02.795 [2024-12-13 10:43:56.491005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:02.795 [2024-12-13 10:43:56.491014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:02.795 [2024-12-13 10:43:56.491023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:02.795 [2024-12-13 10:43:56.492347] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:42:03.054 [2024-12-13 10:43:56.813429] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:03.054 [2024-12-13 10:43:56.813679] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
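For reference, the target bring-up that target/zcopy.sh traces around this point can be reproduced by hand with SPDK's rpc.py. This is a minimal sketch, not the harness itself, under the following assumptions: nvmf_tgt is already running in the cvl_0_0_ns_spdk namespace on the default /var/tmp/spdk.sock RPC socket (the test's rpc_cmd helper is effectively a front end to scripts/rpc.py), paths are relative to the SPDK repo root, and the addresses, NQNs and sizes are the ones used in this run:

  rpc=./scripts/rpc.py
  # TCP transport with zero-copy enabled (flags as issued by target/zcopy.sh@22)
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
  # Subsystem capped at 10 namespaces, listening on the target-side address
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Backing device: 32 MiB malloc bdev with 4096-byte blocks, exposed as namespace 1
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # The host side then exercises the subsystem with bdevperf, e.g.
  #   build/examples/bdevperf --json <attach-config> -t 10 -q 128 -w verify -o 8192
  # where <attach-config> is the bdev_nvme_attach_controller JSON that
  # gen_nvmf_target_json prints further down in this trace.

The trace that follows shows exactly this sequence being driven by rpc_cmd before the first 10-second verify workload is started.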
00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:03.314 [2024-12-13 10:43:57.113301] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:03.314 [2024-12-13 10:43:57.129476] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:42:03.314 10:43:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:03.314 malloc0 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.314 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:03.573 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.573 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:42:03.573 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:42:03.573 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:42:03.573 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:42:03.573 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:03.574 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:03.574 { 00:42:03.574 "params": { 00:42:03.574 "name": "Nvme$subsystem", 00:42:03.574 "trtype": "$TEST_TRANSPORT", 00:42:03.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:03.574 "adrfam": "ipv4", 00:42:03.574 "trsvcid": "$NVMF_PORT", 00:42:03.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:03.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:03.574 "hdgst": ${hdgst:-false}, 00:42:03.574 "ddgst": ${ddgst:-false} 00:42:03.574 }, 00:42:03.574 "method": "bdev_nvme_attach_controller" 00:42:03.574 } 00:42:03.574 EOF 00:42:03.574 )") 00:42:03.574 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:42:03.574 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:42:03.574 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:42:03.574 10:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:03.574 "params": { 00:42:03.574 "name": "Nvme1", 00:42:03.574 "trtype": "tcp", 00:42:03.574 "traddr": "10.0.0.2", 00:42:03.574 "adrfam": "ipv4", 00:42:03.574 "trsvcid": "4420", 00:42:03.574 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:03.574 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:03.574 "hdgst": false, 00:42:03.574 "ddgst": false 00:42:03.574 }, 00:42:03.574 "method": "bdev_nvme_attach_controller" 00:42:03.574 }' 00:42:03.574 [2024-12-13 10:43:57.281280] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:42:03.574 [2024-12-13 10:43:57.281358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid24864 ] 00:42:03.574 [2024-12-13 10:43:57.393662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:03.833 [2024-12-13 10:43:57.497698] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:04.401 Running I/O for 10 seconds... 00:42:06.276 7226.00 IOPS, 56.45 MiB/s [2024-12-13T09:44:01.103Z] 7308.00 IOPS, 57.09 MiB/s [2024-12-13T09:44:02.481Z] 7271.67 IOPS, 56.81 MiB/s [2024-12-13T09:44:03.048Z] 7285.00 IOPS, 56.91 MiB/s [2024-12-13T09:44:04.427Z] 7306.00 IOPS, 57.08 MiB/s [2024-12-13T09:44:05.363Z] 7316.33 IOPS, 57.16 MiB/s [2024-12-13T09:44:06.300Z] 7300.14 IOPS, 57.03 MiB/s [2024-12-13T09:44:07.235Z] 7296.50 IOPS, 57.00 MiB/s [2024-12-13T09:44:08.173Z] 7305.89 IOPS, 57.08 MiB/s [2024-12-13T09:44:08.173Z] 7310.30 IOPS, 57.11 MiB/s 00:42:14.282 Latency(us) 00:42:14.282 [2024-12-13T09:44:08.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:14.282 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:42:14.282 Verification LBA range: start 0x0 length 0x1000 00:42:14.282 Nvme1n1 : 10.01 7314.43 57.14 0.00 0.00 17451.81 2231.34 25715.08 00:42:14.282 [2024-12-13T09:44:08.173Z] =================================================================================================================== 00:42:14.282 [2024-12-13T09:44:08.173Z] Total : 7314.43 57.14 0.00 0.00 17451.81 2231.34 25715.08 00:42:15.219 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=26645 00:42:15.219 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:42:15.219 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:15.219 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:42:15.219 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:42:15.219 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:42:15.219 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:42:15.219 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:15.219 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:15.219 { 00:42:15.219 "params": { 00:42:15.219 "name": "Nvme$subsystem", 00:42:15.219 "trtype": "$TEST_TRANSPORT", 00:42:15.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:15.219 "adrfam": "ipv4", 00:42:15.219 "trsvcid": "$NVMF_PORT", 00:42:15.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:15.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:15.219 "hdgst": ${hdgst:-false}, 00:42:15.219 "ddgst": ${ddgst:-false} 00:42:15.219 }, 00:42:15.219 "method": "bdev_nvme_attach_controller" 00:42:15.219 } 00:42:15.219 EOF 00:42:15.219 )") 00:42:15.219 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:42:15.219 
[2024-12-13 10:44:08.960921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.219 [2024-12-13 10:44:08.960959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.219 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:42:15.219 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:42:15.219 10:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:15.219 "params": { 00:42:15.219 "name": "Nvme1", 00:42:15.219 "trtype": "tcp", 00:42:15.219 "traddr": "10.0.0.2", 00:42:15.219 "adrfam": "ipv4", 00:42:15.219 "trsvcid": "4420", 00:42:15.219 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:15.219 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:15.219 "hdgst": false, 00:42:15.219 "ddgst": false 00:42:15.219 }, 00:42:15.219 "method": "bdev_nvme_attach_controller" 00:42:15.219 }' 00:42:15.219 [2024-12-13 10:44:08.968900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.219 [2024-12-13 10:44:08.968926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.219 [2024-12-13 10:44:08.976869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.219 [2024-12-13 10:44:08.976890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.219 [2024-12-13 10:44:08.984867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.219 [2024-12-13 10:44:08.984887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.219 [2024-12-13 10:44:08.992862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.219 [2024-12-13 10:44:08.992884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.219 [2024-12-13 10:44:09.004856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.219 [2024-12-13 10:44:09.004876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.219 [2024-12-13 10:44:09.012868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.219 [2024-12-13 10:44:09.012888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.219 [2024-12-13 10:44:09.020862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.219 [2024-12-13 10:44:09.020881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.219 [2024-12-13 10:44:09.026992] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:42:15.219 [2024-12-13 10:44:09.027067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid26645 ] 00:42:15.219 [2024-12-13 10:44:09.028852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.219 [2024-12-13 10:44:09.028871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.219 [2024-12-13 10:44:09.036864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.219 [2024-12-13 10:44:09.036882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.219 [2024-12-13 10:44:09.044853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.219 [2024-12-13 10:44:09.044872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.219 [2024-12-13 10:44:09.052869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.219 [2024-12-13 10:44:09.052889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.219 [2024-12-13 10:44:09.060864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.219 [2024-12-13 10:44:09.060883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.220 [2024-12-13 10:44:09.068869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.220 [2024-12-13 10:44:09.068887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.220 [2024-12-13 10:44:09.076860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.220 [2024-12-13 10:44:09.076878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.220 [2024-12-13 10:44:09.084863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.220 [2024-12-13 10:44:09.084882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.220 [2024-12-13 10:44:09.092857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.220 [2024-12-13 10:44:09.092876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.220 [2024-12-13 10:44:09.100866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.220 [2024-12-13 10:44:09.100886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.220 [2024-12-13 10:44:09.108850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.220 [2024-12-13 10:44:09.108869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.479 [2024-12-13 10:44:09.116861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.479 [2024-12-13 10:44:09.116879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.479 [2024-12-13 10:44:09.124860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.479 [2024-12-13 10:44:09.124878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.479 [2024-12-13 10:44:09.132853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:42:15.479 [2024-12-13 10:44:09.132874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.479 [2024-12-13 10:44:09.140762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:15.479 [2024-12-13 10:44:09.140866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.479 [2024-12-13 10:44:09.140883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.479 [2024-12-13 10:44:09.148861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.479 [2024-12-13 10:44:09.148880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.479 [2024-12-13 10:44:09.156859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.479 [2024-12-13 10:44:09.156879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.479 [2024-12-13 10:44:09.164881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.479 [2024-12-13 10:44:09.164900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.479 [2024-12-13 10:44:09.172855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.479 [2024-12-13 10:44:09.172874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.479 [2024-12-13 10:44:09.180862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.479 [2024-12-13 10:44:09.180881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.479 [2024-12-13 10:44:09.188860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.479 [2024-12-13 10:44:09.188879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.479 [2024-12-13 10:44:09.196851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.479 [2024-12-13 10:44:09.196870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.479 [2024-12-13 10:44:09.204881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.479 [2024-12-13 10:44:09.204900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.479 [2024-12-13 10:44:09.212861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.479 [2024-12-13 10:44:09.212880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.479 [2024-12-13 10:44:09.220850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.479 [2024-12-13 10:44:09.220869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.479 [2024-12-13 10:44:09.228863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.480 [2024-12-13 10:44:09.228881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.480 [2024-12-13 10:44:09.236853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.480 [2024-12-13 10:44:09.236871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.480 [2024-12-13 10:44:09.244860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:42:15.480 [2024-12-13 10:44:09.244878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.480 [2024-12-13 10:44:09.252057] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:15.480 [2024-12-13 10:44:09.252860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.480 [2024-12-13 10:44:09.252878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.480 [2024-12-13 10:44:09.260872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.480 [2024-12-13 10:44:09.260891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.480 [2024-12-13 10:44:09.268867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.480 [2024-12-13 10:44:09.268887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.480 [2024-12-13 10:44:09.276865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.480 [2024-12-13 10:44:09.276886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.480 [2024-12-13 10:44:09.284856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.480 [2024-12-13 10:44:09.284875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.480 [2024-12-13 10:44:09.292862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.480 [2024-12-13 10:44:09.292882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.480 [2024-12-13 10:44:09.300849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.480 [2024-12-13 10:44:09.300867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.480 [2024-12-13 10:44:09.308877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.480 [2024-12-13 10:44:09.308896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.480 [2024-12-13 10:44:09.316862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.480 [2024-12-13 10:44:09.316881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.480 [2024-12-13 10:44:09.324850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.480 [2024-12-13 10:44:09.324868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.480 [2024-12-13 10:44:09.332866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.480 [2024-12-13 10:44:09.332886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.480 [2024-12-13 10:44:09.340863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.480 [2024-12-13 10:44:09.340883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.480 [2024-12-13 10:44:09.348863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.480 [2024-12-13 10:44:09.348884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.480 [2024-12-13 10:44:09.356877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.480 [2024-12-13 10:44:09.356895] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.480 [2024-12-13 10:44:09.364850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.480 [2024-12-13 10:44:09.364868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.739 [2024-12-13 10:44:09.372862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.739 [2024-12-13 10:44:09.372880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.739 [2024-12-13 10:44:09.380860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.739 [2024-12-13 10:44:09.380878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.739 [2024-12-13 10:44:09.388848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.739 [2024-12-13 10:44:09.388867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.739 [2024-12-13 10:44:09.396858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.739 [2024-12-13 10:44:09.396875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.739 [2024-12-13 10:44:09.404860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.739 [2024-12-13 10:44:09.404878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.739 [2024-12-13 10:44:09.412868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.739 [2024-12-13 10:44:09.412888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.739 [2024-12-13 10:44:09.420863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.739 [2024-12-13 10:44:09.420882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.739 [2024-12-13 10:44:09.428848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.739 [2024-12-13 10:44:09.428870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.739 [2024-12-13 10:44:09.436863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.739 [2024-12-13 10:44:09.436881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.739 [2024-12-13 10:44:09.444860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.739 [2024-12-13 10:44:09.444878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.739 [2024-12-13 10:44:09.452877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.739 [2024-12-13 10:44:09.452896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.739 [2024-12-13 10:44:09.460861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.739 [2024-12-13 10:44:09.460879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.739 [2024-12-13 10:44:09.468861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.739 [2024-12-13 10:44:09.468880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.739 [2024-12-13 10:44:09.476852] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.739 [2024-12-13 10:44:09.476871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.739 [2024-12-13 10:44:09.484866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.739 [2024-12-13 10:44:09.484886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.739 [2024-12-13 10:44:09.492854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.739 [2024-12-13 10:44:09.492873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.739 [2024-12-13 10:44:09.500864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.739 [2024-12-13 10:44:09.500882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.739 [2024-12-13 10:44:09.508861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.739 [2024-12-13 10:44:09.508879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.739 [2024-12-13 10:44:09.516860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.739 [2024-12-13 10:44:09.516879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.739 [2024-12-13 10:44:09.524861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.739 [2024-12-13 10:44:09.524879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.739 [2024-12-13 10:44:09.532858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.739 [2024-12-13 10:44:09.532877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.739 [2024-12-13 10:44:09.540851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.740 [2024-12-13 10:44:09.540868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.740 [2024-12-13 10:44:09.548873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.740 [2024-12-13 10:44:09.548890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.740 [2024-12-13 10:44:09.556852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.740 [2024-12-13 10:44:09.556871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.740 [2024-12-13 10:44:09.564860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.740 [2024-12-13 10:44:09.564878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.740 [2024-12-13 10:44:09.572871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.740 [2024-12-13 10:44:09.572890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.740 [2024-12-13 10:44:09.580853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.740 [2024-12-13 10:44:09.580876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.740 [2024-12-13 10:44:09.588862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.740 [2024-12-13 10:44:09.588884] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.740 [2024-12-13 10:44:09.596867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.740 [2024-12-13 10:44:09.596889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.740 [2024-12-13 10:44:09.604858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.740 [2024-12-13 10:44:09.604878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.740 [2024-12-13 10:44:09.612863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.740 [2024-12-13 10:44:09.612883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.740 [2024-12-13 10:44:09.620862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.740 [2024-12-13 10:44:09.620883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.740 [2024-12-13 10:44:09.628867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.740 [2024-12-13 10:44:09.628890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.998 [2024-12-13 10:44:09.636875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.998 [2024-12-13 10:44:09.636895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.998 [2024-12-13 10:44:09.644851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.998 [2024-12-13 10:44:09.644870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.998 [2024-12-13 10:44:09.652867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.998 [2024-12-13 10:44:09.652888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.998 [2024-12-13 10:44:09.660973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.998 [2024-12-13 10:44:09.660995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.998 [2024-12-13 10:44:09.668854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.998 [2024-12-13 10:44:09.668875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.998 [2024-12-13 10:44:09.676861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.998 [2024-12-13 10:44:09.676880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.998 [2024-12-13 10:44:09.684853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.998 [2024-12-13 10:44:09.684875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.998 [2024-12-13 10:44:09.692859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.998 [2024-12-13 10:44:09.692878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.998 [2024-12-13 10:44:09.700863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.998 [2024-12-13 10:44:09.700882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.998 [2024-12-13 10:44:09.708854] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.998 [2024-12-13 10:44:09.708874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.998 [2024-12-13 10:44:09.716862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.998 [2024-12-13 10:44:09.716882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.998 [2024-12-13 10:44:09.724862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.998 [2024-12-13 10:44:09.724881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.998 [2024-12-13 10:44:09.732880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.998 [2024-12-13 10:44:09.732904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.998 [2024-12-13 10:44:09.740865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.998 [2024-12-13 10:44:09.740886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.998 [2024-12-13 10:44:09.748850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.998 [2024-12-13 10:44:09.748869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.998 [2024-12-13 10:44:09.756873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.998 [2024-12-13 10:44:09.756893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.998 [2024-12-13 10:44:09.764860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.998 [2024-12-13 10:44:09.764879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.998 [2024-12-13 10:44:09.772850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.998 [2024-12-13 10:44:09.772869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.998 [2024-12-13 10:44:09.780860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.998 [2024-12-13 10:44:09.780878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.998 [2024-12-13 10:44:09.788862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.999 [2024-12-13 10:44:09.788882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.999 [2024-12-13 10:44:09.796850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.999 [2024-12-13 10:44:09.796870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.999 [2024-12-13 10:44:09.804860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.999 [2024-12-13 10:44:09.804881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.999 [2024-12-13 10:44:09.812860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.999 [2024-12-13 10:44:09.812881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.999 [2024-12-13 10:44:09.820884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.999 [2024-12-13 10:44:09.820904] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.999 Running I/O for 5 seconds... 00:42:15.999 [2024-12-13 10:44:09.834034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.999 [2024-12-13 10:44:09.834060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.999 [2024-12-13 10:44:09.843330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.999 [2024-12-13 10:44:09.843355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.999 [2024-12-13 10:44:09.856679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.999 [2024-12-13 10:44:09.856703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.999 [2024-12-13 10:44:09.870046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.999 [2024-12-13 10:44:09.870071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:15.999 [2024-12-13 10:44:09.877631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:15.999 [2024-12-13 10:44:09.877654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.256 [2024-12-13 10:44:09.894210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.256 [2024-12-13 10:44:09.894237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.256 [2024-12-13 10:44:09.902659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.256 [2024-12-13 10:44:09.902683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.256 [2024-12-13 10:44:09.916483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.256 [2024-12-13 10:44:09.916508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.256 [2024-12-13 10:44:09.931265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.256 [2024-12-13 10:44:09.931290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.256 [2024-12-13 10:44:09.948260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.256 [2024-12-13 10:44:09.948284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.256 [2024-12-13 10:44:09.961946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.256 [2024-12-13 10:44:09.961970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.256 [2024-12-13 10:44:09.969627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.256 [2024-12-13 10:44:09.969651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.256 [2024-12-13 10:44:09.986893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.256 [2024-12-13 10:44:09.986919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.256 [2024-12-13 10:44:10.003829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.256 [2024-12-13 10:44:10.003856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.256 [2024-12-13 10:44:10.017754] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.256 [2024-12-13 10:44:10.017780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.256 [2024-12-13 10:44:10.025774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.256 [2024-12-13 10:44:10.025799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.256 [2024-12-13 10:44:10.042675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.256 [2024-12-13 10:44:10.042702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.256 [2024-12-13 10:44:10.059709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.257 [2024-12-13 10:44:10.059735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.257 [2024-12-13 10:44:10.074493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.257 [2024-12-13 10:44:10.074519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.257 [2024-12-13 10:44:10.091933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.257 [2024-12-13 10:44:10.091959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.257 [2024-12-13 10:44:10.106493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.257 [2024-12-13 10:44:10.106518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.257 [2024-12-13 10:44:10.124190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.257 [2024-12-13 10:44:10.124216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.257 [2024-12-13 10:44:10.137948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.257 [2024-12-13 10:44:10.137972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.257 [2024-12-13 10:44:10.145921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.257 [2024-12-13 10:44:10.145945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.515 [2024-12-13 10:44:10.155752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.515 [2024-12-13 10:44:10.155776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.515 [2024-12-13 10:44:10.170147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.515 [2024-12-13 10:44:10.170172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.515 [2024-12-13 10:44:10.178294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.515 [2024-12-13 10:44:10.178318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.515 [2024-12-13 10:44:10.192857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.515 [2024-12-13 10:44:10.192881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.515 [2024-12-13 10:44:10.206344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.515 [2024-12-13 10:44:10.206368] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.515 [2024-12-13 10:44:10.214151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.515 [2024-12-13 10:44:10.214175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.515 [2024-12-13 10:44:10.224107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.515 [2024-12-13 10:44:10.224131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.515 [2024-12-13 10:44:10.237366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.515 [2024-12-13 10:44:10.237390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.515 [2024-12-13 10:44:10.250137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.515 [2024-12-13 10:44:10.250161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.515 [2024-12-13 10:44:10.258406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.515 [2024-12-13 10:44:10.258429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.515 [2024-12-13 10:44:10.272804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.515 [2024-12-13 10:44:10.272829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.515 [2024-12-13 10:44:10.280652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.515 [2024-12-13 10:44:10.280676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.515 [2024-12-13 10:44:10.294124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.515 [2024-12-13 10:44:10.294149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.515 [2024-12-13 10:44:10.311915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.515 [2024-12-13 10:44:10.311940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.515 [2024-12-13 10:44:10.326848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.515 [2024-12-13 10:44:10.326872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.515 [2024-12-13 10:44:10.343719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.515 [2024-12-13 10:44:10.343743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.515 [2024-12-13 10:44:10.357284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.515 [2024-12-13 10:44:10.357308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.515 [2024-12-13 10:44:10.369309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.515 [2024-12-13 10:44:10.369339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.515 [2024-12-13 10:44:10.381490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.515 [2024-12-13 10:44:10.381514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.515 [2024-12-13 10:44:10.393432] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.515 [2024-12-13 10:44:10.393463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.515 [2024-12-13 10:44:10.405329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.515 [2024-12-13 10:44:10.405357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.774 [2024-12-13 10:44:10.415636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.774 [2024-12-13 10:44:10.415660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.774 [2024-12-13 10:44:10.431537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.774 [2024-12-13 10:44:10.431563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.774 [2024-12-13 10:44:10.448064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.774 [2024-12-13 10:44:10.448089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.774 [2024-12-13 10:44:10.463054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.774 [2024-12-13 10:44:10.463079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.774 [2024-12-13 10:44:10.479882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.774 [2024-12-13 10:44:10.479906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.774 [2024-12-13 10:44:10.493327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.774 [2024-12-13 10:44:10.493351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.774 [2024-12-13 10:44:10.505877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.774 [2024-12-13 10:44:10.505902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.774 [2024-12-13 10:44:10.513822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.774 [2024-12-13 10:44:10.513846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.774 [2024-12-13 10:44:10.530929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.774 [2024-12-13 10:44:10.530953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.774 [2024-12-13 10:44:10.548101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.774 [2024-12-13 10:44:10.548129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.774 [2024-12-13 10:44:10.562581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.774 [2024-12-13 10:44:10.562606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.774 [2024-12-13 10:44:10.570685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.774 [2024-12-13 10:44:10.570709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.774 [2024-12-13 10:44:10.585292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.774 [2024-12-13 10:44:10.585316] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.774 [2024-12-13 10:44:10.597429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.774 [2024-12-13 10:44:10.597460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.774 [2024-12-13 10:44:10.610430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.774 [2024-12-13 10:44:10.610461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.774 [2024-12-13 10:44:10.618373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.774 [2024-12-13 10:44:10.618398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.774 [2024-12-13 10:44:10.632052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.774 [2024-12-13 10:44:10.632078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.774 [2024-12-13 10:44:10.647883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.774 [2024-12-13 10:44:10.647907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:16.774 [2024-12-13 10:44:10.663857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:16.774 [2024-12-13 10:44:10.663889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.033 [2024-12-13 10:44:10.677986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.033 [2024-12-13 10:44:10.678012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.033 [2024-12-13 10:44:10.685741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.033 [2024-12-13 10:44:10.685765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.033 [2024-12-13 10:44:10.703374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.033 [2024-12-13 10:44:10.703399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.033 [2024-12-13 10:44:10.718482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.033 [2024-12-13 10:44:10.718506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.033 [2024-12-13 10:44:10.726540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.033 [2024-12-13 10:44:10.726564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.033 [2024-12-13 10:44:10.741464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.033 [2024-12-13 10:44:10.741488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.033 [2024-12-13 10:44:10.749814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.033 [2024-12-13 10:44:10.749838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.033 [2024-12-13 10:44:10.766983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.033 [2024-12-13 10:44:10.767008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.033 [2024-12-13 10:44:10.783783] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.033 [2024-12-13 10:44:10.783807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.033 [2024-12-13 10:44:10.797753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.033 [2024-12-13 10:44:10.797778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.033 [2024-12-13 10:44:10.808651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.033 [2024-12-13 10:44:10.808675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.033 [2024-12-13 10:44:10.821707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.033 [2024-12-13 10:44:10.821730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.033 13958.00 IOPS, 109.05 MiB/s [2024-12-13T09:44:10.924Z] [2024-12-13 10:44:10.832532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.033 [2024-12-13 10:44:10.832556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.033 [2024-12-13 10:44:10.847861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.033 [2024-12-13 10:44:10.847885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.033 [2024-12-13 10:44:10.863553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.033 [2024-12-13 10:44:10.863577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.033 [2024-12-13 10:44:10.878112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.033 [2024-12-13 10:44:10.878136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.033 [2024-12-13 10:44:10.885793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.033 [2024-12-13 10:44:10.885821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.033 [2024-12-13 10:44:10.896937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.033 [2024-12-13 10:44:10.896961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.033 [2024-12-13 10:44:10.904792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.033 [2024-12-13 10:44:10.904820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.033 [2024-12-13 10:44:10.914670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.033 [2024-12-13 10:44:10.914693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.292 [2024-12-13 10:44:10.931223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.292 [2024-12-13 10:44:10.931248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.292 [2024-12-13 10:44:10.947388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.292 [2024-12-13 10:44:10.947412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.292 [2024-12-13 10:44:10.962619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
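The repeated message pairs above come from the negative-path portion of the test: each add-namespace RPC that requests NSID 1 is rejected by spdk_nvmf_subsystem_add_ns_ext() because that NSID is already attached to the subsystem, and the nvmf_rpc_ns_paused() callback in nvmf_rpc.c then reports "Unable to add namespace". A minimal sketch of how such a call can be issued directly against the JSON-RPC socket is given below; the socket path, subsystem NQN, and bdev name are illustrative assumptions and are not taken from this log.

#!/usr/bin/env python3
# Minimal sketch: trigger the "Requested NSID 1 already in use" error by sending
# the nvmf_subsystem_add_ns JSON-RPC call twice for the same NSID.
# Assumptions (not taken from this log): the target listens on the default
# /var/tmp/spdk.sock socket, a subsystem nqn.2016-06.io.spdk:cnode1 exists,
# and a bdev named Malloc0 is available.
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"   # assumed default SPDK RPC socket

def rpc(method, params, req_id=1):
    # Send one JSON-RPC 2.0 request over the Unix socket and return the parsed reply.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK_PATH)
        s.sendall(json.dumps({"jsonrpc": "2.0", "id": req_id,
                              "method": method, "params": params}).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                return None
            buf += chunk
            try:
                return json.loads(buf)   # return once a complete JSON reply has arrived
            except json.JSONDecodeError:
                continue                 # partial reply, keep reading

add = {"nqn": "nqn.2016-06.io.spdk:cnode1",
       "namespace": {"bdev_name": "Malloc0", "nsid": 1}}
print(rpc("nvmf_subsystem_add_ns", add))   # first call: NSID 1 is attached
print(rpc("nvmf_subsystem_add_ns", add))   # second call: expected to fail with
                                           # "Requested NSID 1 already in use"

Under those assumptions the second call would return a JSON-RPC error and the target would be expected to log the same subsystem.c/nvmf_rpc.c message pair seen throughout this section.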
00:42:17.292 [2024-12-13 10:44:10.962644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.292 [2024-12-13 10:44:10.979371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.292 [2024-12-13 10:44:10.979395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.292 [2024-12-13 10:44:10.996236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.292 [2024-12-13 10:44:10.996260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.292 [2024-12-13 10:44:11.011391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.292 [2024-12-13 10:44:11.011415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.292 [2024-12-13 10:44:11.027646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.292 [2024-12-13 10:44:11.027671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.292 [2024-12-13 10:44:11.043967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.292 [2024-12-13 10:44:11.043991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.292 [2024-12-13 10:44:11.059117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.292 [2024-12-13 10:44:11.059142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.292 [2024-12-13 10:44:11.074311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.292 [2024-12-13 10:44:11.074336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.292 [2024-12-13 10:44:11.082579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.292 [2024-12-13 10:44:11.082602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.292 [2024-12-13 10:44:11.095894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.292 [2024-12-13 10:44:11.095918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.292 [2024-12-13 10:44:11.111504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.292 [2024-12-13 10:44:11.111528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.292 [2024-12-13 10:44:11.125878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.292 [2024-12-13 10:44:11.125903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.292 [2024-12-13 10:44:11.133618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.292 [2024-12-13 10:44:11.133642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.292 [2024-12-13 10:44:11.145702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.292 [2024-12-13 10:44:11.145726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.292 [2024-12-13 10:44:11.153883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.292 [2024-12-13 10:44:11.153909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.292 [2024-12-13 10:44:11.165182] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.292 [2024-12-13 10:44:11.165206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.292 [2024-12-13 10:44:11.178198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.292 [2024-12-13 10:44:11.178224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.551 [2024-12-13 10:44:11.186323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.551 [2024-12-13 10:44:11.186348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.551 [2024-12-13 10:44:11.201287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.551 [2024-12-13 10:44:11.201312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.551 [2024-12-13 10:44:11.213079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.551 [2024-12-13 10:44:11.213110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.551 [2024-12-13 10:44:11.225775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.551 [2024-12-13 10:44:11.225800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.551 [2024-12-13 10:44:11.234070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.551 [2024-12-13 10:44:11.234093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.551 [2024-12-13 10:44:11.243546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.551 [2024-12-13 10:44:11.243571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.551 [2024-12-13 10:44:11.256735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.551 [2024-12-13 10:44:11.256761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.551 [2024-12-13 10:44:11.264458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.551 [2024-12-13 10:44:11.264482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.551 [2024-12-13 10:44:11.276465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.551 [2024-12-13 10:44:11.276489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.551 [2024-12-13 10:44:11.289750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.551 [2024-12-13 10:44:11.289774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.551 [2024-12-13 10:44:11.299589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.551 [2024-12-13 10:44:11.299612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.551 [2024-12-13 10:44:11.315716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.551 [2024-12-13 10:44:11.315741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.551 [2024-12-13 10:44:11.332060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.551 [2024-12-13 10:44:11.332085] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.551 [2024-12-13 10:44:11.345123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.551 [2024-12-13 10:44:11.345146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.551 [2024-12-13 10:44:11.357933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.551 [2024-12-13 10:44:11.357958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.551 [2024-12-13 10:44:11.365939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.551 [2024-12-13 10:44:11.365963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.551 [2024-12-13 10:44:11.375696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.551 [2024-12-13 10:44:11.375720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.551 [2024-12-13 10:44:11.388116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.551 [2024-12-13 10:44:11.388141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.551 [2024-12-13 10:44:11.403333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.551 [2024-12-13 10:44:11.403357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.551 [2024-12-13 10:44:11.416579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.552 [2024-12-13 10:44:11.416603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.552 [2024-12-13 10:44:11.430935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.552 [2024-12-13 10:44:11.430960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.810 [2024-12-13 10:44:11.447975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.810 [2024-12-13 10:44:11.448000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.810 [2024-12-13 10:44:11.461211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.810 [2024-12-13 10:44:11.461235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.810 [2024-12-13 10:44:11.473310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.810 [2024-12-13 10:44:11.473333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.810 [2024-12-13 10:44:11.484461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.810 [2024-12-13 10:44:11.484488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.810 [2024-12-13 10:44:11.499101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.810 [2024-12-13 10:44:11.499125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.810 [2024-12-13 10:44:11.516047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.810 [2024-12-13 10:44:11.516073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.810 [2024-12-13 10:44:11.528371] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.810 [2024-12-13 10:44:11.528396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.810 [2024-12-13 10:44:11.543812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.810 [2024-12-13 10:44:11.543836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.810 [2024-12-13 10:44:11.559372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.810 [2024-12-13 10:44:11.559396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.810 [2024-12-13 10:44:11.576266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.810 [2024-12-13 10:44:11.576291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.810 [2024-12-13 10:44:11.590602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.810 [2024-12-13 10:44:11.590627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.810 [2024-12-13 10:44:11.607715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.810 [2024-12-13 10:44:11.607739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.810 [2024-12-13 10:44:11.621884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.810 [2024-12-13 10:44:11.621908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.810 [2024-12-13 10:44:11.639043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.810 [2024-12-13 10:44:11.639067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.810 [2024-12-13 10:44:11.656163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.810 [2024-12-13 10:44:11.656187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.810 [2024-12-13 10:44:11.671514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.810 [2024-12-13 10:44:11.671539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.810 [2024-12-13 10:44:11.687963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.810 [2024-12-13 10:44:11.687989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:17.810 [2024-12-13 10:44:11.702008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:17.810 [2024-12-13 10:44:11.702035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.069 [2024-12-13 10:44:11.719109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.069 [2024-12-13 10:44:11.719135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.069 [2024-12-13 10:44:11.735193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.069 [2024-12-13 10:44:11.735218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.069 [2024-12-13 10:44:11.751864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.069 [2024-12-13 10:44:11.751889] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.069 [2024-12-13 10:44:11.767574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.069 [2024-12-13 10:44:11.767599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.069 [2024-12-13 10:44:11.784098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.069 [2024-12-13 10:44:11.784123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.069 [2024-12-13 10:44:11.798478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.069 [2024-12-13 10:44:11.798502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.069 [2024-12-13 10:44:11.806501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.069 [2024-12-13 10:44:11.806525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.069 [2024-12-13 10:44:11.822472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.069 [2024-12-13 10:44:11.822497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.069 14093.50 IOPS, 110.11 MiB/s [2024-12-13T09:44:11.960Z] [2024-12-13 10:44:11.839837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.069 [2024-12-13 10:44:11.839861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.069 [2024-12-13 10:44:11.853156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.069 [2024-12-13 10:44:11.853179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.069 [2024-12-13 10:44:11.865276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.069 [2024-12-13 10:44:11.865299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.069 [2024-12-13 10:44:11.878155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.069 [2024-12-13 10:44:11.878179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.069 [2024-12-13 10:44:11.886134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.069 [2024-12-13 10:44:11.886157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.069 [2024-12-13 10:44:11.895804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.069 [2024-12-13 10:44:11.895828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.069 [2024-12-13 10:44:11.910744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.069 [2024-12-13 10:44:11.910768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.069 [2024-12-13 10:44:11.927412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.069 [2024-12-13 10:44:11.927441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.069 [2024-12-13 10:44:11.943681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.069 [2024-12-13 10:44:11.943705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.069 [2024-12-13 
10:44:11.958652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.069 [2024-12-13 10:44:11.958678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.328 [2024-12-13 10:44:11.975682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.328 [2024-12-13 10:44:11.975707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.328 [2024-12-13 10:44:11.989374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.328 [2024-12-13 10:44:11.989399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.328 [2024-12-13 10:44:12.001919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.328 [2024-12-13 10:44:12.001943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.328 [2024-12-13 10:44:12.009788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.328 [2024-12-13 10:44:12.009811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.328 [2024-12-13 10:44:12.027138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.328 [2024-12-13 10:44:12.027163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.328 [2024-12-13 10:44:12.040298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.328 [2024-12-13 10:44:12.040323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.328 [2024-12-13 10:44:12.055372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.328 [2024-12-13 10:44:12.055396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.328 [2024-12-13 10:44:12.071308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.328 [2024-12-13 10:44:12.071336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.328 [2024-12-13 10:44:12.087768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.328 [2024-12-13 10:44:12.087792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.328 [2024-12-13 10:44:12.104148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.328 [2024-12-13 10:44:12.104180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.328 [2024-12-13 10:44:12.116813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.328 [2024-12-13 10:44:12.116837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.328 [2024-12-13 10:44:12.124949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.328 [2024-12-13 10:44:12.124973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.328 [2024-12-13 10:44:12.134655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.328 [2024-12-13 10:44:12.134679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.328 [2024-12-13 10:44:12.151389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.328 [2024-12-13 10:44:12.151413] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.328 [2024-12-13 10:44:12.167345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.328 [2024-12-13 10:44:12.167369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.328 [2024-12-13 10:44:12.182360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.328 [2024-12-13 10:44:12.182384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.328 [2024-12-13 10:44:12.190375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.328 [2024-12-13 10:44:12.190403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.328 [2024-12-13 10:44:12.206968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.328 [2024-12-13 10:44:12.206993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.586 [2024-12-13 10:44:12.223062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.586 [2024-12-13 10:44:12.223086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.586 [2024-12-13 10:44:12.236559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.586 [2024-12-13 10:44:12.236582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.586 [2024-12-13 10:44:12.250098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.586 [2024-12-13 10:44:12.250121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.586 [2024-12-13 10:44:12.258068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.586 [2024-12-13 10:44:12.258107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.586 [2024-12-13 10:44:12.273221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.586 [2024-12-13 10:44:12.273244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.586 [2024-12-13 10:44:12.281728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.586 [2024-12-13 10:44:12.281750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.586 [2024-12-13 10:44:12.299281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.586 [2024-12-13 10:44:12.299306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.586 [2024-12-13 10:44:12.315604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.586 [2024-12-13 10:44:12.315639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.586 [2024-12-13 10:44:12.330456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.586 [2024-12-13 10:44:12.330481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.586 [2024-12-13 10:44:12.348113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.586 [2024-12-13 10:44:12.348138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.586 [2024-12-13 10:44:12.362007] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.587 [2024-12-13 10:44:12.362031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.587 [2024-12-13 10:44:12.369772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.587 [2024-12-13 10:44:12.369795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.587 [2024-12-13 10:44:12.387205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.587 [2024-12-13 10:44:12.387230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.587 [2024-12-13 10:44:12.402639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.587 [2024-12-13 10:44:12.402664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.587 [2024-12-13 10:44:12.419482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.587 [2024-12-13 10:44:12.419507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.587 [2024-12-13 10:44:12.435324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.587 [2024-12-13 10:44:12.435348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.587 [2024-12-13 10:44:12.452307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.587 [2024-12-13 10:44:12.452332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.587 [2024-12-13 10:44:12.464671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.587 [2024-12-13 10:44:12.464700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.587 [2024-12-13 10:44:12.477813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.587 [2024-12-13 10:44:12.477838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.845 [2024-12-13 10:44:12.495315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.845 [2024-12-13 10:44:12.495340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.845 [2024-12-13 10:44:12.511663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.845 [2024-12-13 10:44:12.511688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.845 [2024-12-13 10:44:12.525098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.845 [2024-12-13 10:44:12.525122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.845 [2024-12-13 10:44:12.533198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.845 [2024-12-13 10:44:12.533223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.845 [2024-12-13 10:44:12.544749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.845 [2024-12-13 10:44:12.544774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.845 [2024-12-13 10:44:12.553236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.845 [2024-12-13 10:44:12.553258] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.845 [2024-12-13 10:44:12.565853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.845 [2024-12-13 10:44:12.565878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.845 [2024-12-13 10:44:12.574022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.845 [2024-12-13 10:44:12.574046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.845 [2024-12-13 10:44:12.583871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.845 [2024-12-13 10:44:12.583895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.845 [2024-12-13 10:44:12.596953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.845 [2024-12-13 10:44:12.596978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.845 [2024-12-13 10:44:12.604952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.845 [2024-12-13 10:44:12.604977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.845 [2024-12-13 10:44:12.614786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.845 [2024-12-13 10:44:12.614811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.845 [2024-12-13 10:44:12.630606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.845 [2024-12-13 10:44:12.630632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.845 [2024-12-13 10:44:12.647906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.845 [2024-12-13 10:44:12.647932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.845 [2024-12-13 10:44:12.660522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.845 [2024-12-13 10:44:12.660546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.845 [2024-12-13 10:44:12.675382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.845 [2024-12-13 10:44:12.675406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.845 [2024-12-13 10:44:12.691042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.845 [2024-12-13 10:44:12.691067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.845 [2024-12-13 10:44:12.707041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.845 [2024-12-13 10:44:12.707069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:18.845 [2024-12-13 10:44:12.722400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:18.845 [2024-12-13 10:44:12.722425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.104 [2024-12-13 10:44:12.739657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.104 [2024-12-13 10:44:12.739684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.104 [2024-12-13 10:44:12.752386] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.104 [2024-12-13 10:44:12.752411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.104 [2024-12-13 10:44:12.767271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.104 [2024-12-13 10:44:12.767296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.104 [2024-12-13 10:44:12.783230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.104 [2024-12-13 10:44:12.783255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.104 [2024-12-13 10:44:12.799347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.104 [2024-12-13 10:44:12.799371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.104 [2024-12-13 10:44:12.816659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.104 [2024-12-13 10:44:12.816683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.104 [2024-12-13 10:44:12.829124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.104 [2024-12-13 10:44:12.829151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.104 14134.33 IOPS, 110.42 MiB/s [2024-12-13T09:44:12.995Z] [2024-12-13 10:44:12.840941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.104 [2024-12-13 10:44:12.840967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.104 [2024-12-13 10:44:12.848934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.104 [2024-12-13 10:44:12.848958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.104 [2024-12-13 10:44:12.858426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.104 [2024-12-13 10:44:12.858457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.104 [2024-12-13 10:44:12.875173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.104 [2024-12-13 10:44:12.875197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.104 [2024-12-13 10:44:12.890232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.104 [2024-12-13 10:44:12.890257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.104 [2024-12-13 10:44:12.897812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.104 [2024-12-13 10:44:12.897837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.104 [2024-12-13 10:44:12.914967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.104 [2024-12-13 10:44:12.914995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.104 [2024-12-13 10:44:12.932087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.104 [2024-12-13 10:44:12.932111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.104 [2024-12-13 10:44:12.945435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
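The interleaved throughput samples (13958.00 IOPS, 109.05 MiB/s; 14093.50 IOPS, 110.11 MiB/s; 14134.33 IOPS, 110.42 MiB/s) belong to the I/O run announced by "Running I/O for 5 seconds..." earlier in this section. The MiB/s figures are consistent with an I/O size of 8 KiB (MiB/s = IOPS x 8192 / 2^20); the 8 KiB size is inferred from the numbers and is not stated in the log. A small arithmetic check:

# Sanity check of the per-second samples above.
# Assumption: the I/O size is 8 KiB; this is inferred, not logged.
io_size = 8192  # bytes
for iops, reported in [(13958.00, 109.05), (14093.50, 110.11), (14134.33, 110.42)]:
    mibps = iops * io_size / 2**20
    print(f"{iops:>9.2f} IOPS -> {mibps:6.2f} MiB/s (log reports {reported} MiB/s)")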
00:42:19.104 [2024-12-13 10:44:12.945467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.104 [2024-12-13 10:44:12.963077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.104 [2024-12-13 10:44:12.963103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.104 [2024-12-13 10:44:12.979390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.104 [2024-12-13 10:44:12.979421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.104 [2024-12-13 10:44:12.994576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.104 [2024-12-13 10:44:12.994600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.363 [2024-12-13 10:44:13.012281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.363 [2024-12-13 10:44:13.012306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.363 [2024-12-13 10:44:13.025755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.363 [2024-12-13 10:44:13.025780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.363 [2024-12-13 10:44:13.033831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.363 [2024-12-13 10:44:13.033855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.363 [2024-12-13 10:44:13.050505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.363 [2024-12-13 10:44:13.050530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.363 [2024-12-13 10:44:13.068361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.363 [2024-12-13 10:44:13.068387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.363 [2024-12-13 10:44:13.080626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.363 [2024-12-13 10:44:13.080651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.363 [2024-12-13 10:44:13.095192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.363 [2024-12-13 10:44:13.095217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.363 [2024-12-13 10:44:13.112091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.363 [2024-12-13 10:44:13.112115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.363 [2024-12-13 10:44:13.124219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.363 [2024-12-13 10:44:13.124244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.363 [2024-12-13 10:44:13.137679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.363 [2024-12-13 10:44:13.137704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.363 [2024-12-13 10:44:13.148595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.363 [2024-12-13 10:44:13.148619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.363 [2024-12-13 10:44:13.163288] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.363 [2024-12-13 10:44:13.163312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.363 [2024-12-13 10:44:13.179724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.364 [2024-12-13 10:44:13.179749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.364 [2024-12-13 10:44:13.192605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.364 [2024-12-13 10:44:13.192630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.364 [2024-12-13 10:44:13.207137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.364 [2024-12-13 10:44:13.207163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.364 [2024-12-13 10:44:13.223549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.364 [2024-12-13 10:44:13.223573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.364 [2024-12-13 10:44:13.238102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.364 [2024-12-13 10:44:13.238126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.364 [2024-12-13 10:44:13.247595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.364 [2024-12-13 10:44:13.247619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.622 [2024-12-13 10:44:13.264029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.622 [2024-12-13 10:44:13.264053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.622 [2024-12-13 10:44:13.278038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.622 [2024-12-13 10:44:13.278062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.622 [2024-12-13 10:44:13.286046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.622 [2024-12-13 10:44:13.286069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.622 [2024-12-13 10:44:13.295569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.622 [2024-12-13 10:44:13.295593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.622 [2024-12-13 10:44:13.307875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.622 [2024-12-13 10:44:13.307898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.622 [2024-12-13 10:44:13.323528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.622 [2024-12-13 10:44:13.323552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.622 [2024-12-13 10:44:13.340116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.622 [2024-12-13 10:44:13.340140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.622 [2024-12-13 10:44:13.354339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.622 [2024-12-13 10:44:13.354364] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.622 [2024-12-13 10:44:13.362321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.622 [2024-12-13 10:44:13.362344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.622 [2024-12-13 10:44:13.378874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.622 [2024-12-13 10:44:13.378899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.622 [2024-12-13 10:44:13.395656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.622 [2024-12-13 10:44:13.395681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.622 [2024-12-13 10:44:13.410533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.622 [2024-12-13 10:44:13.410557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.622 [2024-12-13 10:44:13.418425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.622 [2024-12-13 10:44:13.418456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.622 [2024-12-13 10:44:13.433368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.622 [2024-12-13 10:44:13.433391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.622 [2024-12-13 10:44:13.445299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.622 [2024-12-13 10:44:13.445323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.622 [2024-12-13 10:44:13.458006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.622 [2024-12-13 10:44:13.458030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.622 [2024-12-13 10:44:13.466062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.622 [2024-12-13 10:44:13.466085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.622 [2024-12-13 10:44:13.475637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.622 [2024-12-13 10:44:13.475661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.622 [2024-12-13 10:44:13.489695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.622 [2024-12-13 10:44:13.489729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.622 [2024-12-13 10:44:13.501343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.622 [2024-12-13 10:44:13.501366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.622 [2024-12-13 10:44:13.513611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.622 [2024-12-13 10:44:13.513635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.881 [2024-12-13 10:44:13.521697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.881 [2024-12-13 10:44:13.521731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.881 [2024-12-13 10:44:13.532641] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.881 [2024-12-13 10:44:13.532665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.881 [2024-12-13 10:44:13.546933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.881 [2024-12-13 10:44:13.546958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.881 [2024-12-13 10:44:13.563660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.881 [2024-12-13 10:44:13.563684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.881 [2024-12-13 10:44:13.579934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.881 [2024-12-13 10:44:13.579958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.881 [2024-12-13 10:44:13.594486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.881 [2024-12-13 10:44:13.594510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.881 [2024-12-13 10:44:13.602571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.881 [2024-12-13 10:44:13.602594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.881 [2024-12-13 10:44:13.616678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.881 [2024-12-13 10:44:13.616702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.881 [2024-12-13 10:44:13.629581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.881 [2024-12-13 10:44:13.629606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.882 [2024-12-13 10:44:13.641491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.882 [2024-12-13 10:44:13.641514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.882 [2024-12-13 10:44:13.653323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.882 [2024-12-13 10:44:13.653347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.882 [2024-12-13 10:44:13.665610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.882 [2024-12-13 10:44:13.665634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.882 [2024-12-13 10:44:13.673664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.882 [2024-12-13 10:44:13.673687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.882 [2024-12-13 10:44:13.691088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.882 [2024-12-13 10:44:13.691112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.882 [2024-12-13 10:44:13.705719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.882 [2024-12-13 10:44:13.705743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.882 [2024-12-13 10:44:13.716536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.882 [2024-12-13 10:44:13.716565] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.882 [2024-12-13 10:44:13.731423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.882 [2024-12-13 10:44:13.731454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.882 [2024-12-13 10:44:13.747280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.882 [2024-12-13 10:44:13.747305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:19.882 [2024-12-13 10:44:13.763912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:19.882 [2024-12-13 10:44:13.763937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.141 [2024-12-13 10:44:13.777235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.141 [2024-12-13 10:44:13.777259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.141 [2024-12-13 10:44:13.789994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.141 [2024-12-13 10:44:13.790019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.141 [2024-12-13 10:44:13.797820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.141 [2024-12-13 10:44:13.797844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.141 [2024-12-13 10:44:13.814546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.141 [2024-12-13 10:44:13.814570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.141 [2024-12-13 10:44:13.831800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.141 [2024-12-13 10:44:13.831831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.141 14162.50 IOPS, 110.64 MiB/s [2024-12-13T09:44:14.032Z] [2024-12-13 10:44:13.846567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.141 [2024-12-13 10:44:13.846591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.141 [2024-12-13 10:44:13.854769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.141 [2024-12-13 10:44:13.854793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.141 [2024-12-13 10:44:13.869464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.141 [2024-12-13 10:44:13.869488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.141 [2024-12-13 10:44:13.879580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.141 [2024-12-13 10:44:13.879605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.141 [2024-12-13 10:44:13.896190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.141 [2024-12-13 10:44:13.896216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.141 [2024-12-13 10:44:13.907596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.141 [2024-12-13 10:44:13.907621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.141 [2024-12-13 
10:44:13.923673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.141 [2024-12-13 10:44:13.923697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.141 [2024-12-13 10:44:13.939638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.141 [2024-12-13 10:44:13.939662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.141 [2024-12-13 10:44:13.953967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.141 [2024-12-13 10:44:13.953992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.141 [2024-12-13 10:44:13.961804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.141 [2024-12-13 10:44:13.961828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.141 [2024-12-13 10:44:13.978474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.141 [2024-12-13 10:44:13.978502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.141 [2024-12-13 10:44:13.995501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.141 [2024-12-13 10:44:13.995526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.141 [2024-12-13 10:44:14.008628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.141 [2024-12-13 10:44:14.008652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.141 [2024-12-13 10:44:14.022854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.141 [2024-12-13 10:44:14.022878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.400 [2024-12-13 10:44:14.039966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.400 [2024-12-13 10:44:14.039991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.400 [2024-12-13 10:44:14.052015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.400 [2024-12-13 10:44:14.052041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.400 [2024-12-13 10:44:14.067555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.400 [2024-12-13 10:44:14.067579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.400 [2024-12-13 10:44:14.083267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.400 [2024-12-13 10:44:14.083293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.400 [2024-12-13 10:44:14.098261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.400 [2024-12-13 10:44:14.098286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.400 [2024-12-13 10:44:14.116066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.400 [2024-12-13 10:44:14.116090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.400 [2024-12-13 10:44:14.128499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.400 [2024-12-13 10:44:14.128524] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.400 [2024-12-13 10:44:14.142149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.400 [2024-12-13 10:44:14.142175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.400 [2024-12-13 10:44:14.149833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.400 [2024-12-13 10:44:14.149857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.400 [2024-12-13 10:44:14.160658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.400 [2024-12-13 10:44:14.160683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.400 [2024-12-13 10:44:14.175016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.400 [2024-12-13 10:44:14.175041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.400 [2024-12-13 10:44:14.192080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.400 [2024-12-13 10:44:14.192105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.400 [2024-12-13 10:44:14.206264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.400 [2024-12-13 10:44:14.206289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.400 [2024-12-13 10:44:14.213951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.400 [2024-12-13 10:44:14.213975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.400 [2024-12-13 10:44:14.223489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.400 [2024-12-13 10:44:14.223513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.401 [2024-12-13 10:44:14.239656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.401 [2024-12-13 10:44:14.239687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.401 [2024-12-13 10:44:14.254189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.401 [2024-12-13 10:44:14.254213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.401 [2024-12-13 10:44:14.262095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.401 [2024-12-13 10:44:14.262119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.401 [2024-12-13 10:44:14.271828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.401 [2024-12-13 10:44:14.271852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.401 [2024-12-13 10:44:14.284290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.401 [2024-12-13 10:44:14.284315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.659 [2024-12-13 10:44:14.299243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.659 [2024-12-13 10:44:14.299269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.659 [2024-12-13 10:44:14.315579] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.659 [2024-12-13 10:44:14.315604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.659 [2024-12-13 10:44:14.331443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.659 [2024-12-13 10:44:14.331475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.659 [2024-12-13 10:44:14.345703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.659 [2024-12-13 10:44:14.345737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.659 [2024-12-13 10:44:14.357426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.659 [2024-12-13 10:44:14.357460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.659 [2024-12-13 10:44:14.369255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.659 [2024-12-13 10:44:14.369278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.659 [2024-12-13 10:44:14.381914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.659 [2024-12-13 10:44:14.381939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.659 [2024-12-13 10:44:14.389967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.659 [2024-12-13 10:44:14.389991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.659 [2024-12-13 10:44:14.399406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.659 [2024-12-13 10:44:14.399431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.659 [2024-12-13 10:44:14.411815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.659 [2024-12-13 10:44:14.411840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.659 [2024-12-13 10:44:14.427384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.659 [2024-12-13 10:44:14.427409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.659 [2024-12-13 10:44:14.443790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.659 [2024-12-13 10:44:14.443814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.659 [2024-12-13 10:44:14.459649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.659 [2024-12-13 10:44:14.459673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.659 [2024-12-13 10:44:14.475171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.659 [2024-12-13 10:44:14.475195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.659 [2024-12-13 10:44:14.491644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.659 [2024-12-13 10:44:14.491669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.659 [2024-12-13 10:44:14.507731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.659 [2024-12-13 10:44:14.507756] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.659 [2024-12-13 10:44:14.522416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.659 [2024-12-13 10:44:14.522440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.659 [2024-12-13 10:44:14.530422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.659 [2024-12-13 10:44:14.530446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.659 [2024-12-13 10:44:14.544342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.659 [2024-12-13 10:44:14.544366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-13 10:44:14.559096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-13 10:44:14.559121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-13 10:44:14.574731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-13 10:44:14.574756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-13 10:44:14.591748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-13 10:44:14.591773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-13 10:44:14.604975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-13 10:44:14.604999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-13 10:44:14.613045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-13 10:44:14.613069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-13 10:44:14.622706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-13 10:44:14.622729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-13 10:44:14.638961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-13 10:44:14.638985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-13 10:44:14.656152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-13 10:44:14.656177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-13 10:44:14.670505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-13 10:44:14.670529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-13 10:44:14.678714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-13 10:44:14.678737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-13 10:44:14.692598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-13 10:44:14.692629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-13 10:44:14.706148] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-13 10:44:14.706172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-13 10:44:14.714177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-13 10:44:14.714200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-13 10:44:14.724062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-13 10:44:14.724086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-13 10:44:14.737194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-13 10:44:14.737218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-13 10:44:14.748501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-13 10:44:14.748525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-13 10:44:14.762006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-13 10:44:14.762029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-13 10:44:14.770142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-13 10:44:14.770166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-13 10:44:14.779966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-13 10:44:14.779991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-13 10:44:14.792064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-13 10:44:14.792089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-13 10:44:14.807177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-13 10:44:14.807202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.177 [2024-12-13 10:44:14.823758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.177 [2024-12-13 10:44:14.823781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.177 [2024-12-13 10:44:14.836552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.177 [2024-12-13 10:44:14.836576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:21.177 14172.80 IOPS, 110.72 MiB/s
00:42:21.177 Latency(us)
00:42:21.177 [2024-12-13T09:44:15.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:21.177 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:42:21.177 Nvme1n1 : 5.01 14175.18 110.74 0.00 0.00 9021.15 2293.76 15478.98
00:42:21.177 [2024-12-13T09:44:15.068Z] ===================================================================================================================
00:42:21.177 [2024-12-13T09:44:15.068Z] Total : 14175.18 110.74 0.00 0.00 9021.15 2293.76 15478.98
00:42:21.177 [2024-12-13 10:44:14.844857]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.177 [2024-12-13 10:44:14.844879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.177 [2024-12-13 10:44:14.852864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.177 [2024-12-13 10:44:14.852884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.177 [2024-12-13 10:44:14.860864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.177 [2024-12-13 10:44:14.860885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.177 [2024-12-13 10:44:14.868855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.177 [2024-12-13 10:44:14.868874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.177 [2024-12-13 10:44:14.876863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.177 [2024-12-13 10:44:14.876881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.177 [2024-12-13 10:44:14.884866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.177 [2024-12-13 10:44:14.884885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.177 [2024-12-13 10:44:14.892863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.177 [2024-12-13 10:44:14.892889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.177 [2024-12-13 10:44:14.900885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.177 [2024-12-13 10:44:14.900907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.177 [2024-12-13 10:44:14.920869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-13 10:44:14.920888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-13 10:44:14.928864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-13 10:44:14.928883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-13 10:44:14.936863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-13 10:44:14.936881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-13 10:44:14.944851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-13 10:44:14.944870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-13 10:44:14.952863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-13 10:44:14.952882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-13 10:44:14.960860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-13 10:44:14.960879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-13 10:44:14.968854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-13 10:44:14.968872] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-13 10:44:14.976862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-13 10:44:14.976880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-13 10:44:14.984857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-13 10:44:14.984876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-13 10:44:14.992873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-13 10:44:14.992894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-13 10:44:15.000872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-13 10:44:15.000891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-13 10:44:15.008852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-13 10:44:15.008871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-13 10:44:15.016877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-13 10:44:15.016896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-13 10:44:15.024865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-13 10:44:15.024885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-13 10:44:15.032856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-13 10:44:15.032875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-13 10:44:15.040859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-13 10:44:15.040877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-13 10:44:15.048847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-13 10:44:15.048865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-13 10:44:15.056863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-13 10:44:15.056887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-13 10:44:15.064865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-13 10:44:15.064884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.072850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.072869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.080858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.080876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.088860] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.088878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.096855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.096874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.104860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.104879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.112847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.112865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.120856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.120874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.128855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.128873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.136849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.136868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.144865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.144883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.152863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.152882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.160846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.160865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.168862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.168880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.176852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.176870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.184862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.184880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.192862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.192881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.200851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.200870] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.208870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.208892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.216863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.216881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.224852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.224870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.232861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.232879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.240854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.240872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.248863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.248883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.256867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.256886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.264856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.264880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.272866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.272885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.280859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.280877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.288847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.288865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.296861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.296880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.304864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.304883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.312860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.312878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.436 [2024-12-13 10:44:15.320865] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.436 [2024-12-13 10:44:15.320883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.328848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.328867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.336862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.336879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.344860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.344878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.352850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.352868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.360860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.360882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.368851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.368870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.376858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.376878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.384863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.384881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.392852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.392870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.400871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.400889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.408860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.408878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.416848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.416866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.424861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.424879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.432850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.432869] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.440866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.440884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.448859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.448877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.456854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.456872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.464859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.464877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.472858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.472876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.480850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.480868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.488862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.488880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.496867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.496886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.504857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.504876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.512863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.512882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.520848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.520867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.528861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.528882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.536860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.536879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.544853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.544873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.552863] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.552882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.560851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.560870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.568860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.568878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.576861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.576879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.695 [2024-12-13 10:44:15.584855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.695 [2024-12-13 10:44:15.584874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.954 [2024-12-13 10:44:15.592875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.954 [2024-12-13 10:44:15.592894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.954 [2024-12-13 10:44:15.600860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.954 [2024-12-13 10:44:15.600879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.954 [2024-12-13 10:44:15.608855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.954 [2024-12-13 10:44:15.608874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.954 [2024-12-13 10:44:15.616859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.954 [2024-12-13 10:44:15.616877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.954 [2024-12-13 10:44:15.624849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.954 [2024-12-13 10:44:15.624868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.954 [2024-12-13 10:44:15.632862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.954 [2024-12-13 10:44:15.632881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.954 [2024-12-13 10:44:15.640857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.954 [2024-12-13 10:44:15.640875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.954 [2024-12-13 10:44:15.648848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.954 [2024-12-13 10:44:15.648866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.954 [2024-12-13 10:44:15.656865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.954 [2024-12-13 10:44:15.656885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.954 [2024-12-13 10:44:15.664863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.954 [2024-12-13 10:44:15.664882] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.954 [2024-12-13 10:44:15.672852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.954 [2024-12-13 10:44:15.672870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.954 [2024-12-13 10:44:15.680888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.954 [2024-12-13 10:44:15.680908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.954 [2024-12-13 10:44:15.688864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.954 [2024-12-13 10:44:15.688883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.954 [2024-12-13 10:44:15.696861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.954 [2024-12-13 10:44:15.696881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.954 [2024-12-13 10:44:15.704859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.954 [2024-12-13 10:44:15.704878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.954 [2024-12-13 10:44:15.712851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.954 [2024-12-13 10:44:15.712869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.954 [2024-12-13 10:44:15.720864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.954 [2024-12-13 10:44:15.720882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.954 [2024-12-13 10:44:15.728860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.954 [2024-12-13 10:44:15.728879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.954 [2024-12-13 10:44:15.736853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.954 [2024-12-13 10:44:15.736872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (26645) - No such process 00:42:21.954 10:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 26645 00:42:21.954 10:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:21.954 10:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.954 10:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:21.954 10:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.954 10:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:42:21.954 10:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.954 10:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:21.954 delay0 00:42:21.954 10:44:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.954 10:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:42:21.954 10:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.954 10:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:21.954 10:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.954 10:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:42:22.213 [2024-12-13 10:44:15.944621] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:42:28.777 Initializing NVMe Controllers 00:42:28.777 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:28.777 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:42:28.777 Initialization complete. Launching workers. 00:42:28.777 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 298, failed: 6247 00:42:28.778 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 6497, failed to submit 48 00:42:28.778 success 6390, unsuccessful 107, failed 0 00:42:28.778 10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:42:28.778 10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:42:28.778 10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:28.778 10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:42:28.778 10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:28.778 10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:42:28.778 10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:28.778 10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:28.778 rmmod nvme_tcp 00:42:28.778 rmmod nvme_fabrics 00:42:28.778 rmmod nvme_keyring 00:42:28.778 10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:28.778 10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:42:28.778 10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:42:28.778 10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 24620 ']' 00:42:28.778 10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 24620 00:42:28.778 10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 24620 ']' 00:42:28.778 10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 24620 00:42:28.778 
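For readers following the zcopy flow, the abort run above reduces to a short sequence of SPDK JSON-RPC calls plus the bundled abort example. This is a minimal sketch, assuming nvmf_tgt is already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 and that rpc_cmd is the usual autotest wrapper around scripts/rpc.py:

    # swap the subsystem's namespace for a delay bdev stacked on malloc0 (parameters as logged above)
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # issue queued I/O and abort commands against the slow namespace over NVMe/TCP
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The delay bdev keeps commands outstanding long enough for the abort tool to cancel them, which is what the submitted/failed abort counters reported above reflect.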
10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:42:28.778 10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:28.778 10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 24620 00:42:28.778 10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:28.778 10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:28.778 10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 24620' 00:42:28.778 killing process with pid 24620 00:42:28.778 10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 24620 00:42:28.778 10:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 24620 00:42:29.714 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:29.714 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:29.714 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:29.714 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:42:29.714 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:42:29.714 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:29.714 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:42:29.714 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:29.714 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:29.714 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:29.714 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:29.714 10:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:31.617 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:31.617 00:42:31.617 real 0m34.451s 00:42:31.617 user 0m46.141s 00:42:31.617 sys 0m11.656s 00:42:31.617 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:31.617 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:31.617 ************************************ 00:42:31.617 END TEST nvmf_zcopy 00:42:31.617 ************************************ 00:42:31.617 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:42:31.617 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:31.617 
10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:31.617 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:31.617 ************************************ 00:42:31.617 START TEST nvmf_nmic 00:42:31.617 ************************************ 00:42:31.617 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:42:31.876 * Looking for test storage... 00:42:31.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:31.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:31.876 --rc genhtml_branch_coverage=1 00:42:31.876 --rc genhtml_function_coverage=1 00:42:31.876 --rc genhtml_legend=1 00:42:31.876 --rc geninfo_all_blocks=1 00:42:31.876 --rc geninfo_unexecuted_blocks=1 00:42:31.876 00:42:31.876 ' 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:31.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:31.876 --rc genhtml_branch_coverage=1 00:42:31.876 --rc genhtml_function_coverage=1 00:42:31.876 --rc genhtml_legend=1 00:42:31.876 --rc geninfo_all_blocks=1 00:42:31.876 --rc geninfo_unexecuted_blocks=1 00:42:31.876 00:42:31.876 ' 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:31.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:31.876 --rc genhtml_branch_coverage=1 00:42:31.876 --rc genhtml_function_coverage=1 00:42:31.876 --rc genhtml_legend=1 00:42:31.876 --rc geninfo_all_blocks=1 00:42:31.876 --rc geninfo_unexecuted_blocks=1 00:42:31.876 00:42:31.876 ' 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:31.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:31.876 --rc genhtml_branch_coverage=1 00:42:31.876 --rc genhtml_function_coverage=1 00:42:31.876 --rc genhtml_legend=1 00:42:31.876 --rc geninfo_all_blocks=1 00:42:31.876 --rc geninfo_unexecuted_blocks=1 00:42:31.876 00:42:31.876 ' 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:31.876 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:31.877 10:44:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:42:31.877 10:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:37.155 10:44:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:37.155 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:37.155 10:44:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:37.155 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:37.155 Found net devices under 0000:af:00.0: cvl_0_0 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:37.155 
10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:37.155 Found net devices under 0000:af:00.1: cvl_0_1 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:37.155 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:37.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:37.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:42:37.156 00:42:37.156 --- 10.0.0.2 ping statistics --- 00:42:37.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:37.156 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:37.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:37.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:42:37.156 00:42:37.156 --- 10.0.0.1 ping statistics --- 00:42:37.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:37.156 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=32109 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 32109 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 32109 ']' 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:37.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:37.156 10:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:37.156 [2024-12-13 10:44:30.961340] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:37.156 [2024-12-13 10:44:30.963339] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:42:37.156 [2024-12-13 10:44:30.963405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:37.415 [2024-12-13 10:44:31.079042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:37.415 [2024-12-13 10:44:31.186187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:37.415 [2024-12-13 10:44:31.186234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:37.415 [2024-12-13 10:44:31.186246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:37.415 [2024-12-13 10:44:31.186255] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:37.415 [2024-12-13 10:44:31.186264] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:37.415 [2024-12-13 10:44:31.188472] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:42:37.415 [2024-12-13 10:44:31.188549] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:42:37.415 [2024-12-13 10:44:31.188573] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:37.415 [2024-12-13 10:44:31.188585] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:42:37.675 [2024-12-13 10:44:31.539946] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:37.675 [2024-12-13 10:44:31.541404] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:37.675 [2024-12-13 10:44:31.543026] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:42:37.675 [2024-12-13 10:44:31.544194] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:37.675 [2024-12-13 10:44:31.544520] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:37.934 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:37.934 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:42:37.934 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:37.934 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:37.934 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:37.934 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:37.934 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:37.934 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:37.934 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:37.934 [2024-12-13 10:44:31.797575] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:37.934 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:37.934 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:37.934 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:37.934 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:38.194 Malloc0 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:38.194 
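Stripped of the xtrace noise, the nmic target bring-up recorded above is just the following RPC sequence. This is a sketch of the steps as they appear in the log; rpc_cmd again stands for the scripts/rpc.py client, and the 10.0.0.2 address comes from the network namespace setup earlier in this run:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8192-byte in-capsule data
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0          # 64 MiB backing bdev with 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420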
10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:38.194 [2024-12-13 10:44:31.909583] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:42:38.194 test case1: single bdev can't be used in multiple subsystems 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:38.194 [2024-12-13 10:44:31.937242] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:42:38.194 [2024-12-13 10:44:31.937277] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:42:38.194 [2024-12-13 10:44:31.937289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:38.194 request: 00:42:38.194 { 00:42:38.194 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:42:38.194 "namespace": { 00:42:38.194 "bdev_name": "Malloc0", 00:42:38.194 "no_auto_visible": false, 00:42:38.194 "hide_metadata": false 00:42:38.194 }, 00:42:38.194 "method": "nvmf_subsystem_add_ns", 00:42:38.194 "req_id": 1 00:42:38.194 } 00:42:38.194 Got JSON-RPC error response 00:42:38.194 response: 00:42:38.194 { 00:42:38.194 "code": -32602, 00:42:38.194 "message": "Invalid parameters" 00:42:38.194 } 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:42:38.194 10:44:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:42:38.194 Adding namespace failed - expected result. 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:42:38.194 test case2: host connect to nvmf target in multiple paths 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:38.194 [2024-12-13 10:44:31.949335] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:38.194 10:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:38.453 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:42:39.021 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:42:39.021 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:42:39.021 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:42:39.021 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:42:39.021 10:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:42:40.926 10:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:42:40.926 10:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:42:40.926 10:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:42:40.926 10:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:42:40.926 10:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:42:40.926 10:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:42:40.926 10:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:42:40.926 [global] 00:42:40.926 thread=1 00:42:40.926 invalidate=1 
00:42:40.926 rw=write 00:42:40.926 time_based=1 00:42:40.926 runtime=1 00:42:40.926 ioengine=libaio 00:42:40.926 direct=1 00:42:40.926 bs=4096 00:42:40.926 iodepth=1 00:42:40.926 norandommap=0 00:42:40.926 numjobs=1 00:42:40.926 00:42:40.926 verify_dump=1 00:42:40.926 verify_backlog=512 00:42:40.926 verify_state_save=0 00:42:40.926 do_verify=1 00:42:40.926 verify=crc32c-intel 00:42:40.926 [job0] 00:42:40.926 filename=/dev/nvme0n1 00:42:40.926 Could not set queue depth (nvme0n1) 00:42:41.184 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:41.184 fio-3.35 00:42:41.184 Starting 1 thread 00:42:42.560 00:42:42.560 job0: (groupid=0, jobs=1): err= 0: pid=32930: Fri Dec 13 10:44:36 2024 00:42:42.560 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:42:42.560 slat (nsec): min=7144, max=32712, avg=8002.82, stdev=1017.77 00:42:42.560 clat (usec): min=206, max=416, avg=244.48, stdev= 9.69 00:42:42.560 lat (usec): min=213, max=428, avg=252.48, stdev= 9.73 00:42:42.560 clat percentiles (usec): 00:42:42.560 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 237], 20.00th=[ 239], 00:42:42.560 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 245], 00:42:42.560 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 255], 00:42:42.560 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 343], 99.95th=[ 416], 00:42:42.560 | 99.99th=[ 416] 00:42:42.560 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(9.98MiB/1001msec); 0 zone resets 00:42:42.560 slat (usec): min=10, max=29269, avg=23.45, stdev=578.82 00:42:42.560 clat (usec): min=139, max=315, avg=160.25, stdev= 8.96 00:42:42.560 lat (usec): min=151, max=29567, avg=183.69, stdev=581.61 00:42:42.560 clat percentiles (usec): 00:42:42.560 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 155], 00:42:42.560 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 159], 60.00th=[ 161], 00:42:42.560 | 70.00th=[ 163], 80.00th=[ 165], 90.00th=[ 169], 95.00th=[ 172], 00:42:42.560 | 99.00th=[ 196], 99.50th=[ 210], 99.90th=[ 227], 99.95th=[ 297], 00:42:42.560 | 99.99th=[ 318] 00:42:42.560 bw ( KiB/s): min= 9520, max= 9520, per=93.24%, avg=9520.00, stdev= 0.00, samples=1 00:42:42.560 iops : min= 2380, max= 2380, avg=2380.00, stdev= 0.00, samples=1 00:42:42.560 lat (usec) : 250=92.57%, 500=7.43% 00:42:42.560 cpu : usr=3.70%, sys=7.50%, ctx=4606, majf=0, minf=1 00:42:42.560 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:42.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:42.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:42.560 issued rwts: total=2048,2555,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:42.560 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:42.560 00:42:42.560 Run status group 0 (all jobs): 00:42:42.560 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:42:42.560 WRITE: bw=9.97MiB/s (10.5MB/s), 9.97MiB/s-9.97MiB/s (10.5MB/s-10.5MB/s), io=9.98MiB (10.5MB), run=1001-1001msec 00:42:42.560 00:42:42.560 Disk stats (read/write): 00:42:42.560 nvme0n1: ios=2018/2048, merge=0/0, ticks=1452/311, in_queue=1763, util=98.70% 00:42:42.560 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:42.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:42:42.819 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:42.819 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:42:42.819 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:42:42.819 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:42.819 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:42:42.819 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:42.819 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:42:42.819 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:42:42.819 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:42:42.819 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:42.819 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:42:42.819 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:42.819 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:42:42.819 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:42.819 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:42.819 rmmod nvme_tcp 00:42:42.819 rmmod nvme_fabrics 00:42:42.819 rmmod nvme_keyring 00:42:43.078 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:43.078 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:42:43.078 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:42:43.078 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 32109 ']' 00:42:43.078 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 32109 00:42:43.078 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 32109 ']' 00:42:43.078 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 32109 00:42:43.078 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:42:43.078 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:43.078 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 32109 00:42:43.078 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:43.078 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:43.078 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 32109' 00:42:43.078 killing process with pid 32109 
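The two nmic cases exercised above come down to the sketch below, with commands taken from the log; the hostnqn/hostid values were generated earlier by nvme gen-hostnqn, so they are shown here as placeholder variables:

    # test case1: a bdev already claimed by one subsystem cannot be added to another
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: Malloc0 already claimed (exclusive_write) by cnode1
    # test case2: expose cnode1 on a second port and connect the host over both paths
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

The fio write pass and the final nvme disconnect then run against the resulting /dev/nvme0n1 exactly as captured above.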
00:42:43.078 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 32109 00:42:43.078 10:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 32109 00:42:44.456 10:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:44.456 10:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:44.456 10:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:44.456 10:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:42:44.456 10:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:44.456 10:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:42:44.456 10:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:42:44.456 10:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:44.456 10:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:44.456 10:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:44.456 10:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:44.456 10:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:46.362 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:46.362 00:42:46.362 real 0m14.694s 00:42:46.362 user 0m27.823s 00:42:46.362 sys 0m5.768s 00:42:46.362 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:46.362 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:46.362 ************************************ 00:42:46.362 END TEST nvmf_nmic 00:42:46.362 ************************************ 00:42:46.362 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:42:46.362 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:46.362 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:46.362 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:46.362 ************************************ 00:42:46.362 START TEST nvmf_fio_target 00:42:46.362 ************************************ 00:42:46.362 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:42:46.622 * Looking for test storage... 
00:42:46.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:46.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.622 --rc genhtml_branch_coverage=1 00:42:46.622 --rc genhtml_function_coverage=1 00:42:46.622 --rc genhtml_legend=1 00:42:46.622 --rc geninfo_all_blocks=1 00:42:46.622 --rc geninfo_unexecuted_blocks=1 00:42:46.622 00:42:46.622 ' 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:46.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.622 --rc genhtml_branch_coverage=1 00:42:46.622 --rc genhtml_function_coverage=1 00:42:46.622 --rc genhtml_legend=1 00:42:46.622 --rc geninfo_all_blocks=1 00:42:46.622 --rc geninfo_unexecuted_blocks=1 00:42:46.622 00:42:46.622 ' 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:46.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.622 --rc genhtml_branch_coverage=1 00:42:46.622 --rc genhtml_function_coverage=1 00:42:46.622 --rc genhtml_legend=1 00:42:46.622 --rc geninfo_all_blocks=1 00:42:46.622 --rc geninfo_unexecuted_blocks=1 00:42:46.622 00:42:46.622 ' 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:46.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:46.622 --rc genhtml_branch_coverage=1 00:42:46.622 --rc genhtml_function_coverage=1 00:42:46.622 --rc genhtml_legend=1 00:42:46.622 --rc geninfo_all_blocks=1 00:42:46.622 --rc geninfo_unexecuted_blocks=1 00:42:46.622 
00:42:46.622 ' 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:46.622 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:42:46.623 10:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:51.899 10:44:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:51.899 10:44:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:51.899 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:51.899 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:51.899 Found net 
devices under 0000:af:00.0: cvl_0_0 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:51.899 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:51.900 Found net devices under 0000:af:00.1: cvl_0_1 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:51.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:51.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:42:51.900 00:42:51.900 --- 10.0.0.2 ping statistics --- 00:42:51.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:51.900 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:51.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:51.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:42:51.900 00:42:51.900 --- 10.0.0.1 ping statistics --- 00:42:51.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:51.900 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:51.900 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:52.159 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:42:52.159 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:52.159 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:52.159 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:52.159 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=36676 00:42:52.159 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 36676 00:42:52.159 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:42:52.159 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 36676 ']' 00:42:52.159 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:52.159 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:52.159 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:52.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
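Editor's note: the nvmfappstart step above launches the target application inside the cvl_0_0_ns_spdk namespace and then waits for its JSON-RPC socket before any rpc.py calls are issued. Condensed into plain shell (a sketch; the socket-polling loop is an assumption standing in for the harness's waitforlisten helper):

    # Start the NVMe-oF target in the isolated namespace, interrupt mode, cores 0-3.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!

    # Wait for the RPC socket to appear before configuring the target.
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done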
00:42:52.159 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:52.159 10:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:52.159 [2024-12-13 10:44:45.890922] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:52.159 [2024-12-13 10:44:45.893160] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:42:52.159 [2024-12-13 10:44:45.893229] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:52.159 [2024-12-13 10:44:46.029712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:52.418 [2024-12-13 10:44:46.137641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:52.418 [2024-12-13 10:44:46.137684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:52.418 [2024-12-13 10:44:46.137696] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:52.418 [2024-12-13 10:44:46.137705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:52.418 [2024-12-13 10:44:46.137719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:52.418 [2024-12-13 10:44:46.140068] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:42:52.418 [2024-12-13 10:44:46.140142] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:42:52.418 [2024-12-13 10:44:46.140243] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:52.418 [2024-12-13 10:44:46.140253] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:42:52.678 [2024-12-13 10:44:46.445017] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:52.678 [2024-12-13 10:44:46.446415] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:52.678 [2024-12-13 10:44:46.448094] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:52.678 [2024-12-13 10:44:46.449442] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:52.678 [2024-12-13 10:44:46.449774] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
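Editor's note: the xtrace output that follows drives fio.sh's target setup one rpc.py call at a time. Condensed restatement of the same commands visible in the trace below (rpc.py abbreviated to its basename, host NQN/ID elided, and the loop forms are an editorial grouping, not the script's literal control flow):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 0 6); do rpc.py bdev_malloc_create 64 512; done        # Malloc0..Malloc6
    rpc.py bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn=... --hostid=...

After the connect, the four exported namespaces show up as /dev/nvme0n1..nvme0n4 on the initiator side, which is what the subsequent fio write/verify jobs target.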
00:42:52.937 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:52.937 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:42:52.937 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:52.937 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:52.937 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:52.937 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:52.937 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:42:53.196 [2024-12-13 10:44:46.905266] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:53.196 10:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:53.455 10:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:42:53.455 10:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:53.714 10:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:42:53.714 10:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:53.973 10:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:42:53.973 10:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:54.232 10:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:42:54.232 10:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:42:54.492 10:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:54.751 10:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:42:54.751 10:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:55.010 10:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:42:55.010 10:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:55.269 10:44:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:42:55.269 10:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:42:55.269 10:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:55.528 10:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:42:55.528 10:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:55.787 10:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:42:55.787 10:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:42:56.045 10:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:56.045 [2024-12-13 10:44:49.897184] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:56.045 10:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:42:56.304 10:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:42:56.562 10:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:56.820 10:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:42:56.820 10:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:42:56.820 10:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:42:56.820 10:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:42:56.820 10:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:42:56.820 10:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:42:59.373 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:42:59.373 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:42:59.373 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:42:59.373 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:42:59.373 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:42:59.373 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:42:59.373 10:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:42:59.373 [global] 00:42:59.373 thread=1 00:42:59.373 invalidate=1 00:42:59.373 rw=write 00:42:59.373 time_based=1 00:42:59.373 runtime=1 00:42:59.373 ioengine=libaio 00:42:59.373 direct=1 00:42:59.373 bs=4096 00:42:59.373 iodepth=1 00:42:59.373 norandommap=0 00:42:59.373 numjobs=1 00:42:59.373 00:42:59.373 verify_dump=1 00:42:59.373 verify_backlog=512 00:42:59.373 verify_state_save=0 00:42:59.373 do_verify=1 00:42:59.373 verify=crc32c-intel 00:42:59.373 [job0] 00:42:59.373 filename=/dev/nvme0n1 00:42:59.373 [job1] 00:42:59.373 filename=/dev/nvme0n2 00:42:59.373 [job2] 00:42:59.373 filename=/dev/nvme0n3 00:42:59.373 [job3] 00:42:59.373 filename=/dev/nvme0n4 00:42:59.373 Could not set queue depth (nvme0n1) 00:42:59.373 Could not set queue depth (nvme0n2) 00:42:59.373 Could not set queue depth (nvme0n3) 00:42:59.373 Could not set queue depth (nvme0n4) 00:42:59.373 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:59.373 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:59.373 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:59.373 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:59.373 fio-3.35 00:42:59.373 Starting 4 threads 00:43:00.447 00:43:00.447 job0: (groupid=0, jobs=1): err= 0: pid=37952: Fri Dec 13 10:44:54 2024 00:43:00.447 read: IOPS=1918, BW=7672KiB/s (7856kB/s)(7680KiB/1001msec) 00:43:00.447 slat (nsec): min=6726, max=39347, avg=8426.33, stdev=2183.13 00:43:00.447 clat (usec): min=222, max=499, avg=299.47, stdev=58.84 00:43:00.447 lat (usec): min=233, max=507, avg=307.90, stdev=58.75 00:43:00.447 clat percentiles (usec): 00:43:00.447 | 1.00th=[ 239], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 265], 00:43:00.447 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:43:00.447 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 420], 95.00th=[ 441], 00:43:00.447 | 99.00th=[ 469], 99.50th=[ 478], 99.90th=[ 498], 99.95th=[ 498], 00:43:00.447 | 99.99th=[ 498] 00:43:00.447 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:43:00.447 slat (nsec): min=9738, max=38658, avg=11472.14, stdev=2134.84 00:43:00.447 clat (usec): min=146, max=355, avg=182.36, stdev=15.72 00:43:00.447 lat (usec): min=157, max=366, avg=193.83, stdev=15.82 00:43:00.447 clat percentiles (usec): 00:43:00.447 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 172], 00:43:00.447 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 184], 00:43:00.447 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 212], 00:43:00.447 | 99.00th=[ 231], 99.50th=[ 
241], 99.90th=[ 277], 99.95th=[ 289], 00:43:00.447 | 99.99th=[ 355] 00:43:00.447 bw ( KiB/s): min= 8175, max= 8175, per=40.55%, avg=8175.00, stdev= 0.00, samples=1 00:43:00.447 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:43:00.447 lat (usec) : 250=55.29%, 500=44.71% 00:43:00.447 cpu : usr=1.90%, sys=4.30%, ctx=3969, majf=0, minf=1 00:43:00.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:00.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:00.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:00.447 issued rwts: total=1920,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:00.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:00.447 job1: (groupid=0, jobs=1): err= 0: pid=37965: Fri Dec 13 10:44:54 2024 00:43:00.447 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:43:00.447 slat (nsec): min=6839, max=39866, avg=8570.00, stdev=2015.14 00:43:00.447 clat (usec): min=217, max=657, avg=253.74, stdev=23.51 00:43:00.447 lat (usec): min=225, max=668, avg=262.31, stdev=23.56 00:43:00.447 clat percentiles (usec): 00:43:00.447 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 243], 00:43:00.447 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:43:00.447 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 273], 00:43:00.447 | 99.00th=[ 306], 99.50th=[ 461], 99.90th=[ 474], 99.95th=[ 474], 00:43:00.447 | 99.99th=[ 660] 00:43:00.447 write: IOPS=2121, BW=8488KiB/s (8691kB/s)(8496KiB/1001msec); 0 zone resets 00:43:00.447 slat (nsec): min=9807, max=45914, avg=11754.80, stdev=2271.26 00:43:00.447 clat (usec): min=151, max=688, avg=200.20, stdev=36.03 00:43:00.447 lat (usec): min=162, max=700, avg=211.96, stdev=36.23 00:43:00.447 clat percentiles (usec): 00:43:00.447 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:43:00.447 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 198], 00:43:00.447 | 70.00th=[ 208], 80.00th=[ 221], 90.00th=[ 255], 95.00th=[ 281], 00:43:00.447 | 99.00th=[ 293], 99.50th=[ 293], 99.90th=[ 314], 99.95th=[ 529], 00:43:00.447 | 99.99th=[ 693] 00:43:00.447 bw ( KiB/s): min= 8710, max= 8710, per=43.21%, avg=8710.00, stdev= 0.00, samples=1 00:43:00.447 iops : min= 2177, max= 2177, avg=2177.00, stdev= 0.00, samples=1 00:43:00.447 lat (usec) : 250=68.58%, 500=31.35%, 750=0.07% 00:43:00.447 cpu : usr=4.00%, sys=6.30%, ctx=4173, majf=0, minf=2 00:43:00.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:00.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:00.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:00.447 issued rwts: total=2048,2124,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:00.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:00.447 job2: (groupid=0, jobs=1): err= 0: pid=37980: Fri Dec 13 10:44:54 2024 00:43:00.447 read: IOPS=21, BW=85.4KiB/s (87.4kB/s)(88.0KiB/1031msec) 00:43:00.447 slat (nsec): min=12527, max=26611, avg=22895.73, stdev=4219.01 00:43:00.447 clat (usec): min=40887, max=46033, avg=41329.67, stdev=1108.36 00:43:00.447 lat (usec): min=40912, max=46060, avg=41352.56, stdev=1109.30 00:43:00.447 clat percentiles (usec): 00:43:00.447 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:43:00.447 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:00.447 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 
95.00th=[42206], 00:43:00.447 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:43:00.447 | 99.99th=[45876] 00:43:00.447 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:43:00.447 slat (nsec): min=10416, max=41127, avg=13484.54, stdev=4919.67 00:43:00.447 clat (usec): min=171, max=345, avg=220.97, stdev=21.64 00:43:00.447 lat (usec): min=183, max=375, avg=234.45, stdev=22.09 00:43:00.447 clat percentiles (usec): 00:43:00.447 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 198], 20.00th=[ 206], 00:43:00.447 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 225], 00:43:00.447 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 255], 00:43:00.447 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 347], 99.95th=[ 347], 00:43:00.447 | 99.99th=[ 347] 00:43:00.447 bw ( KiB/s): min= 4087, max= 4087, per=20.27%, avg=4087.00, stdev= 0.00, samples=1 00:43:00.447 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:43:00.447 lat (usec) : 250=89.89%, 500=5.99% 00:43:00.447 lat (msec) : 50=4.12% 00:43:00.447 cpu : usr=0.39%, sys=0.58%, ctx=534, majf=0, minf=2 00:43:00.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:00.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:00.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:00.447 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:00.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:00.447 job3: (groupid=0, jobs=1): err= 0: pid=37985: Fri Dec 13 10:44:54 2024 00:43:00.447 read: IOPS=138, BW=555KiB/s (568kB/s)(564KiB/1016msec) 00:43:00.447 slat (nsec): min=6792, max=43229, avg=11337.84, stdev=6464.79 00:43:00.447 clat (usec): min=212, max=41425, avg=6362.11, stdev=14531.97 00:43:00.447 lat (usec): min=221, max=41434, avg=6373.44, stdev=14536.25 00:43:00.447 clat percentiles (usec): 00:43:00.447 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 255], 20.00th=[ 269], 00:43:00.447 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 306], 60.00th=[ 326], 00:43:00.447 | 70.00th=[ 351], 80.00th=[ 388], 90.00th=[41157], 95.00th=[41157], 00:43:00.447 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:43:00.447 | 99.99th=[41681] 00:43:00.447 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:43:00.447 slat (nsec): min=9433, max=38272, avg=10488.07, stdev=1702.42 00:43:00.447 clat (usec): min=163, max=296, avg=215.55, stdev=19.23 00:43:00.447 lat (usec): min=174, max=306, avg=226.04, stdev=19.40 00:43:00.447 clat percentiles (usec): 00:43:00.447 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 190], 20.00th=[ 200], 00:43:00.447 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:43:00.447 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 239], 95.00th=[ 241], 00:43:00.447 | 99.00th=[ 255], 99.50th=[ 269], 99.90th=[ 297], 99.95th=[ 297], 00:43:00.447 | 99.99th=[ 297] 00:43:00.447 bw ( KiB/s): min= 4087, max= 4087, per=20.27%, avg=4087.00, stdev= 0.00, samples=1 00:43:00.447 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:43:00.447 lat (usec) : 250=78.41%, 500=18.38% 00:43:00.447 lat (msec) : 50=3.22% 00:43:00.447 cpu : usr=0.59%, sys=0.30%, ctx=653, majf=0, minf=1 00:43:00.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:00.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:00.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:43:00.447 issued rwts: total=141,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:00.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:00.447 00:43:00.447 Run status group 0 (all jobs): 00:43:00.447 READ: bw=15.7MiB/s (16.4MB/s), 85.4KiB/s-8184KiB/s (87.4kB/s-8380kB/s), io=16.1MiB (16.9MB), run=1001-1031msec 00:43:00.448 WRITE: bw=19.7MiB/s (20.6MB/s), 1986KiB/s-8488KiB/s (2034kB/s-8691kB/s), io=20.3MiB (21.3MB), run=1001-1031msec 00:43:00.448 00:43:00.448 Disk stats (read/write): 00:43:00.448 nvme0n1: ios=1562/2003, merge=0/0, ticks=1416/350, in_queue=1766, util=97.60% 00:43:00.448 nvme0n2: ios=1556/2036, merge=0/0, ticks=410/379, in_queue=789, util=87.07% 00:43:00.448 nvme0n3: ios=17/512, merge=0/0, ticks=705/111, in_queue=816, util=89.02% 00:43:00.448 nvme0n4: ios=135/512, merge=0/0, ticks=690/108, in_queue=798, util=89.67% 00:43:00.448 10:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:43:00.448 [global] 00:43:00.448 thread=1 00:43:00.448 invalidate=1 00:43:00.448 rw=randwrite 00:43:00.448 time_based=1 00:43:00.448 runtime=1 00:43:00.448 ioengine=libaio 00:43:00.448 direct=1 00:43:00.448 bs=4096 00:43:00.448 iodepth=1 00:43:00.448 norandommap=0 00:43:00.448 numjobs=1 00:43:00.448 00:43:00.448 verify_dump=1 00:43:00.448 verify_backlog=512 00:43:00.448 verify_state_save=0 00:43:00.448 do_verify=1 00:43:00.448 verify=crc32c-intel 00:43:00.448 [job0] 00:43:00.448 filename=/dev/nvme0n1 00:43:00.448 [job1] 00:43:00.448 filename=/dev/nvme0n2 00:43:00.448 [job2] 00:43:00.448 filename=/dev/nvme0n3 00:43:00.448 [job3] 00:43:00.448 filename=/dev/nvme0n4 00:43:00.448 Could not set queue depth (nvme0n1) 00:43:00.448 Could not set queue depth (nvme0n2) 00:43:00.448 Could not set queue depth (nvme0n3) 00:43:00.448 Could not set queue depth (nvme0n4) 00:43:00.758 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:00.758 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:00.758 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:00.758 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:00.758 fio-3.35 00:43:00.758 Starting 4 threads 00:43:02.135 00:43:02.135 job0: (groupid=0, jobs=1): err= 0: pid=38364: Fri Dec 13 10:44:55 2024 00:43:02.135 read: IOPS=1492, BW=5969KiB/s (6112kB/s)(6160KiB/1032msec) 00:43:02.135 slat (nsec): min=7026, max=43065, avg=8153.83, stdev=1577.54 00:43:02.135 clat (usec): min=234, max=41083, avg=368.27, stdev=2073.98 00:43:02.135 lat (usec): min=242, max=41092, avg=376.42, stdev=2074.55 00:43:02.135 clat percentiles (usec): 00:43:02.135 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[ 249], 20.00th=[ 253], 00:43:02.135 | 30.00th=[ 255], 40.00th=[ 258], 50.00th=[ 260], 60.00th=[ 265], 00:43:02.135 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 281], 00:43:02.135 | 99.00th=[ 314], 99.50th=[ 359], 99.90th=[41157], 99.95th=[41157], 00:43:02.135 | 99.99th=[41157] 00:43:02.135 write: IOPS=1984, BW=7938KiB/s (8128kB/s)(8192KiB/1032msec); 0 zone resets 00:43:02.135 slat (usec): min=10, max=33257, avg=28.70, stdev=734.63 00:43:02.135 clat (usec): min=151, max=354, avg=186.53, stdev=14.10 00:43:02.135 lat (usec): min=161, max=33477, avg=215.23, stdev=735.51 
00:43:02.135 clat percentiles (usec): 00:43:02.135 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 178], 00:43:02.135 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:43:02.135 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 210], 00:43:02.135 | 99.00th=[ 235], 99.50th=[ 253], 99.90th=[ 297], 99.95th=[ 330], 00:43:02.135 | 99.99th=[ 355] 00:43:02.135 bw ( KiB/s): min= 7808, max= 8576, per=31.47%, avg=8192.00, stdev=543.06, samples=2 00:43:02.135 iops : min= 1952, max= 2144, avg=2048.00, stdev=135.76, samples=2 00:43:02.135 lat (usec) : 250=62.68%, 500=37.18% 00:43:02.135 lat (msec) : 2=0.03%, 50=0.11% 00:43:02.135 cpu : usr=2.81%, sys=5.92%, ctx=3591, majf=0, minf=1 00:43:02.135 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:02.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.135 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.135 issued rwts: total=1540,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:02.135 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:02.135 job1: (groupid=0, jobs=1): err= 0: pid=38376: Fri Dec 13 10:44:55 2024 00:43:02.135 read: IOPS=2048, BW=8192KiB/s (8389kB/s)(8192KiB/1000msec) 00:43:02.135 slat (nsec): min=6838, max=22469, avg=7917.26, stdev=1360.80 00:43:02.135 clat (usec): min=223, max=575, avg=267.46, stdev=42.23 00:43:02.135 lat (usec): min=231, max=583, avg=275.37, stdev=42.26 00:43:02.135 clat percentiles (usec): 00:43:02.135 | 1.00th=[ 237], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 251], 00:43:02.135 | 30.00th=[ 253], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 262], 00:43:02.135 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 293], 00:43:02.135 | 99.00th=[ 469], 99.50th=[ 482], 99.90th=[ 537], 99.95th=[ 562], 00:43:02.135 | 99.99th=[ 578] 00:43:02.135 write: IOPS=2105, BW=8424KiB/s (8626kB/s)(8432KiB/1001msec); 0 zone resets 00:43:02.135 slat (nsec): min=9670, max=62622, avg=11164.84, stdev=2210.83 00:43:02.135 clat (usec): min=141, max=339, avg=189.64, stdev=20.73 00:43:02.135 lat (usec): min=165, max=370, avg=200.81, stdev=21.11 00:43:02.135 clat percentiles (usec): 00:43:02.135 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 178], 00:43:02.135 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 190], 00:43:02.135 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 221], 00:43:02.135 | 99.00th=[ 297], 99.50th=[ 302], 99.90th=[ 314], 99.95th=[ 318], 00:43:02.135 | 99.99th=[ 338] 00:43:02.135 bw ( KiB/s): min= 8192, max= 8192, per=31.47%, avg=8192.00, stdev= 0.00, samples=1 00:43:02.135 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:43:02.135 lat (usec) : 250=59.36%, 500=40.50%, 750=0.14% 00:43:02.135 cpu : usr=2.70%, sys=7.30%, ctx=4157, majf=0, minf=1 00:43:02.135 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:02.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.135 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.135 issued rwts: total=2048,2108,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:02.135 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:02.135 job2: (groupid=0, jobs=1): err= 0: pid=38391: Fri Dec 13 10:44:55 2024 00:43:02.135 read: IOPS=1878, BW=7512KiB/s (7693kB/s)(7520KiB/1001msec) 00:43:02.135 slat (nsec): min=8271, max=46403, avg=9720.37, stdev=1951.86 00:43:02.135 clat (usec): min=237, max=2494, avg=284.39, stdev=72.22 
00:43:02.136 lat (usec): min=246, max=2504, avg=294.11, stdev=72.37 00:43:02.136 clat percentiles (usec): 00:43:02.136 | 1.00th=[ 251], 5.00th=[ 258], 10.00th=[ 260], 20.00th=[ 265], 00:43:02.136 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 273], 60.00th=[ 277], 00:43:02.136 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 318], 95.00th=[ 326], 00:43:02.136 | 99.00th=[ 453], 99.50th=[ 486], 99.90th=[ 2073], 99.95th=[ 2507], 00:43:02.136 | 99.99th=[ 2507] 00:43:02.136 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:43:02.136 slat (nsec): min=11171, max=44197, avg=13847.23, stdev=2316.35 00:43:02.136 clat (usec): min=165, max=369, avg=198.23, stdev=14.47 00:43:02.136 lat (usec): min=177, max=410, avg=212.08, stdev=15.08 00:43:02.136 clat percentiles (usec): 00:43:02.136 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 186], 00:43:02.136 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 200], 00:43:02.136 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 217], 95.00th=[ 223], 00:43:02.136 | 99.00th=[ 241], 99.50th=[ 249], 99.90th=[ 262], 99.95th=[ 273], 00:43:02.136 | 99.99th=[ 371] 00:43:02.136 bw ( KiB/s): min= 8192, max= 8192, per=31.47%, avg=8192.00, stdev= 0.00, samples=1 00:43:02.136 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:43:02.136 lat (usec) : 250=52.39%, 500=47.45%, 750=0.10% 00:43:02.136 lat (msec) : 4=0.05% 00:43:02.136 cpu : usr=2.80%, sys=7.90%, ctx=3929, majf=0, minf=1 00:43:02.136 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:02.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.136 issued rwts: total=1880,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:02.136 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:02.136 job3: (groupid=0, jobs=1): err= 0: pid=38396: Fri Dec 13 10:44:55 2024 00:43:02.136 read: IOPS=21, BW=86.6KiB/s (88.7kB/s)(88.0KiB/1016msec) 00:43:02.136 slat (nsec): min=10332, max=25401, avg=23736.45, stdev=3021.28 00:43:02.136 clat (usec): min=40859, max=41052, avg=40968.59, stdev=47.13 00:43:02.136 lat (usec): min=40884, max=41076, avg=40992.33, stdev=46.19 00:43:02.136 clat percentiles (usec): 00:43:02.136 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:43:02.136 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:02.136 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:02.136 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:43:02.136 | 99.99th=[41157] 00:43:02.136 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:43:02.136 slat (nsec): min=9018, max=52045, avg=10753.25, stdev=2592.65 00:43:02.136 clat (usec): min=187, max=375, avg=209.19, stdev=13.07 00:43:02.136 lat (usec): min=197, max=428, avg=219.94, stdev=14.33 00:43:02.136 clat percentiles (usec): 00:43:02.136 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 200], 00:43:02.136 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 208], 60.00th=[ 210], 00:43:02.136 | 70.00th=[ 215], 80.00th=[ 217], 90.00th=[ 223], 95.00th=[ 231], 00:43:02.136 | 99.00th=[ 241], 99.50th=[ 249], 99.90th=[ 375], 99.95th=[ 375], 00:43:02.136 | 99.99th=[ 375] 00:43:02.136 bw ( KiB/s): min= 4096, max= 4096, per=15.74%, avg=4096.00, stdev= 0.00, samples=1 00:43:02.136 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:02.136 lat (usec) : 250=95.51%, 500=0.37% 
00:43:02.136 lat (msec) : 50=4.12% 00:43:02.136 cpu : usr=0.30%, sys=0.49%, ctx=535, majf=0, minf=1 00:43:02.136 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:02.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.136 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:02.136 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:02.136 00:43:02.136 Run status group 0 (all jobs): 00:43:02.136 READ: bw=20.8MiB/s (21.8MB/s), 86.6KiB/s-8192KiB/s (88.7kB/s-8389kB/s), io=21.4MiB (22.5MB), run=1000-1032msec 00:43:02.136 WRITE: bw=25.4MiB/s (26.7MB/s), 2016KiB/s-8424KiB/s (2064kB/s-8626kB/s), io=26.2MiB (27.5MB), run=1001-1032msec 00:43:02.136 00:43:02.136 Disk stats (read/write): 00:43:02.136 nvme0n1: ios=1564/1982, merge=0/0, ticks=1349/344, in_queue=1693, util=99.30% 00:43:02.136 nvme0n2: ios=1567/2048, merge=0/0, ticks=753/354, in_queue=1107, util=91.27% 00:43:02.136 nvme0n3: ios=1566/1836, merge=0/0, ticks=1411/335, in_queue=1746, util=98.54% 00:43:02.136 nvme0n4: ios=43/512, merge=0/0, ticks=1724/99, in_queue=1823, util=98.53% 00:43:02.136 10:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:43:02.136 [global] 00:43:02.136 thread=1 00:43:02.136 invalidate=1 00:43:02.136 rw=write 00:43:02.136 time_based=1 00:43:02.136 runtime=1 00:43:02.136 ioengine=libaio 00:43:02.136 direct=1 00:43:02.136 bs=4096 00:43:02.136 iodepth=128 00:43:02.136 norandommap=0 00:43:02.136 numjobs=1 00:43:02.136 00:43:02.136 verify_dump=1 00:43:02.136 verify_backlog=512 00:43:02.136 verify_state_save=0 00:43:02.136 do_verify=1 00:43:02.136 verify=crc32c-intel 00:43:02.136 [job0] 00:43:02.136 filename=/dev/nvme0n1 00:43:02.136 [job1] 00:43:02.136 filename=/dev/nvme0n2 00:43:02.136 [job2] 00:43:02.136 filename=/dev/nvme0n3 00:43:02.136 [job3] 00:43:02.136 filename=/dev/nvme0n4 00:43:02.136 Could not set queue depth (nvme0n1) 00:43:02.136 Could not set queue depth (nvme0n2) 00:43:02.136 Could not set queue depth (nvme0n3) 00:43:02.136 Could not set queue depth (nvme0n4) 00:43:02.395 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:02.395 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:02.395 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:02.395 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:02.395 fio-3.35 00:43:02.395 Starting 4 threads 00:43:03.794 00:43:03.794 job0: (groupid=0, jobs=1): err= 0: pid=38788: Fri Dec 13 10:44:57 2024 00:43:03.794 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:43:03.794 slat (nsec): min=1093, max=10934k, avg=106440.72, stdev=594240.94 00:43:03.794 clat (usec): min=4398, max=32818, avg=12877.14, stdev=2950.64 00:43:03.794 lat (usec): min=4405, max=32821, avg=12983.58, stdev=2991.53 00:43:03.794 clat percentiles (usec): 00:43:03.794 | 1.00th=[ 6915], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[10814], 00:43:03.794 | 30.00th=[11469], 40.00th=[12125], 50.00th=[12518], 60.00th=[12911], 00:43:03.794 | 70.00th=[13304], 80.00th=[14222], 90.00th=[15401], 95.00th=[17171], 00:43:03.794 | 99.00th=[26084], 
99.50th=[30016], 99.90th=[32900], 99.95th=[32900], 00:43:03.794 | 99.99th=[32900] 00:43:03.794 write: IOPS=4217, BW=16.5MiB/s (17.3MB/s)(16.5MiB/1004msec); 0 zone resets 00:43:03.794 slat (usec): min=2, max=20908, avg=128.37, stdev=682.34 00:43:03.794 clat (usec): min=2965, max=93007, avg=17629.02, stdev=13400.21 00:43:03.794 lat (usec): min=2974, max=93016, avg=17757.39, stdev=13465.53 00:43:03.794 clat percentiles (usec): 00:43:03.794 | 1.00th=[ 5932], 5.00th=[11207], 10.00th=[11863], 20.00th=[12256], 00:43:03.794 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:43:03.794 | 70.00th=[14353], 80.00th=[19006], 90.00th=[23725], 95.00th=[53740], 00:43:03.794 | 99.00th=[85459], 99.50th=[89654], 99.90th=[91751], 99.95th=[92799], 00:43:03.795 | 99.99th=[92799] 00:43:03.795 bw ( KiB/s): min=14032, max=18824, per=22.96%, avg=16428.00, stdev=3388.46, samples=2 00:43:03.795 iops : min= 3508, max= 4706, avg=4107.00, stdev=847.11, samples=2 00:43:03.795 lat (msec) : 4=0.08%, 10=5.08%, 20=85.14%, 50=7.03%, 100=2.67% 00:43:03.795 cpu : usr=2.99%, sys=3.59%, ctx=583, majf=0, minf=1 00:43:03.795 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:43:03.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:03.795 issued rwts: total=4096,4234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:03.795 job1: (groupid=0, jobs=1): err= 0: pid=38799: Fri Dec 13 10:44:57 2024 00:43:03.795 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:43:03.795 slat (nsec): min=1338, max=26690k, avg=104569.57, stdev=877926.42 00:43:03.795 clat (usec): min=6024, max=58356, avg=13964.05, stdev=5339.34 00:43:03.795 lat (usec): min=6030, max=58370, avg=14068.62, stdev=5411.77 00:43:03.795 clat percentiles (usec): 00:43:03.795 | 1.00th=[ 7767], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[ 9765], 00:43:03.795 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12518], 60.00th=[13698], 00:43:03.795 | 70.00th=[15533], 80.00th=[16909], 90.00th=[19792], 95.00th=[23200], 00:43:03.795 | 99.00th=[34866], 99.50th=[34866], 99.90th=[58459], 99.95th=[58459], 00:43:03.795 | 99.99th=[58459] 00:43:03.795 write: IOPS=4630, BW=18.1MiB/s (19.0MB/s)(18.2MiB/1006msec); 0 zone resets 00:43:03.795 slat (usec): min=2, max=24150, avg=105.34, stdev=786.43 00:43:03.795 clat (usec): min=3250, max=52769, avg=13520.92, stdev=6848.45 00:43:03.795 lat (usec): min=3259, max=52776, avg=13626.27, stdev=6891.48 00:43:03.795 clat percentiles (usec): 00:43:03.795 | 1.00th=[ 6390], 5.00th=[ 7504], 10.00th=[ 8455], 20.00th=[10552], 00:43:03.795 | 30.00th=[11076], 40.00th=[11994], 50.00th=[12387], 60.00th=[12780], 00:43:03.795 | 70.00th=[12911], 80.00th=[13042], 90.00th=[16909], 95.00th=[26608], 00:43:03.795 | 99.00th=[49546], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:43:03.795 | 99.99th=[52691] 00:43:03.795 bw ( KiB/s): min=16432, max=20432, per=25.76%, avg=18432.00, stdev=2828.43, samples=2 00:43:03.795 iops : min= 4108, max= 5108, avg=4608.00, stdev=707.11, samples=2 00:43:03.795 lat (msec) : 4=0.06%, 10=19.31%, 20=71.67%, 50=8.52%, 100=0.44% 00:43:03.795 cpu : usr=4.38%, sys=4.38%, ctx=443, majf=0, minf=1 00:43:03.795 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:43:03.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:43:03.795 issued rwts: total=4608,4658,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:03.795 job2: (groupid=0, jobs=1): err= 0: pid=38815: Fri Dec 13 10:44:57 2024 00:43:03.795 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:43:03.795 slat (nsec): min=1657, max=12943k, avg=124488.03, stdev=974406.68 00:43:03.795 clat (usec): min=3255, max=60311, avg=14771.07, stdev=6292.00 00:43:03.795 lat (usec): min=3268, max=60316, avg=14895.56, stdev=6379.01 00:43:03.795 clat percentiles (usec): 00:43:03.795 | 1.00th=[ 7504], 5.00th=[10814], 10.00th=[11207], 20.00th=[11994], 00:43:03.795 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13304], 60.00th=[13566], 00:43:03.795 | 70.00th=[13829], 80.00th=[15008], 90.00th=[19792], 95.00th=[24773], 00:43:03.795 | 99.00th=[47449], 99.50th=[57934], 99.90th=[60556], 99.95th=[60556], 00:43:03.795 | 99.99th=[60556] 00:43:03.795 write: IOPS=4427, BW=17.3MiB/s (18.1MB/s)(17.4MiB/1005msec); 0 zone resets 00:43:03.795 slat (usec): min=2, max=12086, avg=104.58, stdev=744.50 00:43:03.795 clat (usec): min=2189, max=60301, avg=15042.45, stdev=6383.83 00:43:03.795 lat (usec): min=2233, max=60304, avg=15147.02, stdev=6425.31 00:43:03.795 clat percentiles (usec): 00:43:03.795 | 1.00th=[ 4555], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[10421], 00:43:03.795 | 30.00th=[12518], 40.00th=[13435], 50.00th=[13829], 60.00th=[14222], 00:43:03.795 | 70.00th=[14615], 80.00th=[19006], 90.00th=[23200], 95.00th=[23725], 00:43:03.795 | 99.00th=[39060], 99.50th=[49021], 99.90th=[49546], 99.95th=[49546], 00:43:03.795 | 99.99th=[60556] 00:43:03.795 bw ( KiB/s): min=16384, max=18192, per=24.16%, avg=17288.00, stdev=1278.45, samples=2 00:43:03.795 iops : min= 4096, max= 4548, avg=4322.00, stdev=319.61, samples=2 00:43:03.795 lat (msec) : 4=0.48%, 10=10.03%, 20=77.79%, 50=11.34%, 100=0.36% 00:43:03.795 cpu : usr=3.88%, sys=4.88%, ctx=315, majf=0, minf=1 00:43:03.795 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:43:03.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:03.795 issued rwts: total=4096,4450,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:03.795 job3: (groupid=0, jobs=1): err= 0: pid=38821: Fri Dec 13 10:44:57 2024 00:43:03.795 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:43:03.795 slat (nsec): min=1524, max=17167k, avg=98580.53, stdev=825643.05 00:43:03.795 clat (usec): min=2625, max=41966, avg=14382.88, stdev=4689.60 00:43:03.795 lat (usec): min=2634, max=41974, avg=14481.46, stdev=4754.50 00:43:03.795 clat percentiles (usec): 00:43:03.795 | 1.00th=[ 5080], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[11338], 00:43:03.795 | 30.00th=[12256], 40.00th=[12911], 50.00th=[13304], 60.00th=[13960], 00:43:03.795 | 70.00th=[14877], 80.00th=[17171], 90.00th=[20841], 95.00th=[24511], 00:43:03.795 | 99.00th=[25822], 99.50th=[38011], 99.90th=[42206], 99.95th=[42206], 00:43:03.795 | 99.99th=[42206] 00:43:03.795 write: IOPS=4680, BW=18.3MiB/s (19.2MB/s)(18.5MiB/1010msec); 0 zone resets 00:43:03.795 slat (usec): min=2, max=16646, avg=95.98, stdev=810.60 00:43:03.795 clat (usec): min=1555, max=31408, avg=13079.75, stdev=3307.47 00:43:03.795 lat (usec): min=1568, max=31422, avg=13175.73, stdev=3386.26 00:43:03.795 clat percentiles (usec): 00:43:03.795 | 1.00th=[ 5211], 
5.00th=[ 8225], 10.00th=[ 8717], 20.00th=[ 9634], 00:43:03.795 | 30.00th=[11863], 40.00th=[12649], 50.00th=[13173], 60.00th=[13829], 00:43:03.795 | 70.00th=[14222], 80.00th=[15008], 90.00th=[17957], 95.00th=[18744], 00:43:03.795 | 99.00th=[19530], 99.50th=[20579], 99.90th=[27132], 99.95th=[29492], 00:43:03.795 | 99.99th=[31327] 00:43:03.795 bw ( KiB/s): min=16648, max=20216, per=25.76%, avg=18432.00, stdev=2522.96, samples=2 00:43:03.795 iops : min= 4162, max= 5054, avg=4608.00, stdev=630.74, samples=2 00:43:03.795 lat (msec) : 2=0.02%, 4=0.34%, 10=14.42%, 20=79.67%, 50=5.55% 00:43:03.795 cpu : usr=4.76%, sys=5.35%, ctx=265, majf=0, minf=2 00:43:03.795 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:43:03.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:03.795 issued rwts: total=4608,4727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:03.795 00:43:03.795 Run status group 0 (all jobs): 00:43:03.795 READ: bw=67.3MiB/s (70.6MB/s), 15.9MiB/s-17.9MiB/s (16.7MB/s-18.8MB/s), io=68.0MiB (71.3MB), run=1004-1010msec 00:43:03.795 WRITE: bw=69.9MiB/s (73.3MB/s), 16.5MiB/s-18.3MiB/s (17.3MB/s-19.2MB/s), io=70.6MiB (74.0MB), run=1004-1010msec 00:43:03.795 00:43:03.795 Disk stats (read/write): 00:43:03.795 nvme0n1: ios=3260/3584, merge=0/0, ticks=19807/43705, in_queue=63512, util=86.57% 00:43:03.795 nvme0n2: ios=3611/4062, merge=0/0, ticks=50235/54753, in_queue=104988, util=97.76% 00:43:03.795 nvme0n3: ios=3522/3584, merge=0/0, ticks=50624/53772, in_queue=104396, util=88.91% 00:43:03.795 nvme0n4: ios=3654/4096, merge=0/0, ticks=43869/47501, in_queue=91370, util=89.66% 00:43:03.795 10:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:43:03.795 [global] 00:43:03.795 thread=1 00:43:03.795 invalidate=1 00:43:03.795 rw=randwrite 00:43:03.795 time_based=1 00:43:03.795 runtime=1 00:43:03.795 ioengine=libaio 00:43:03.795 direct=1 00:43:03.795 bs=4096 00:43:03.795 iodepth=128 00:43:03.795 norandommap=0 00:43:03.795 numjobs=1 00:43:03.795 00:43:03.795 verify_dump=1 00:43:03.795 verify_backlog=512 00:43:03.795 verify_state_save=0 00:43:03.795 do_verify=1 00:43:03.795 verify=crc32c-intel 00:43:03.795 [job0] 00:43:03.795 filename=/dev/nvme0n1 00:43:03.795 [job1] 00:43:03.795 filename=/dev/nvme0n2 00:43:03.795 [job2] 00:43:03.795 filename=/dev/nvme0n3 00:43:03.795 [job3] 00:43:03.795 filename=/dev/nvme0n4 00:43:03.795 Could not set queue depth (nvme0n1) 00:43:03.795 Could not set queue depth (nvme0n2) 00:43:03.795 Could not set queue depth (nvme0n3) 00:43:03.795 Could not set queue depth (nvme0n4) 00:43:04.064 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:04.064 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:04.064 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:04.064 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:04.064 fio-3.35 00:43:04.064 Starting 4 threads 00:43:05.439 00:43:05.439 job0: (groupid=0, jobs=1): err= 0: pid=39209: Fri Dec 13 10:44:58 2024 00:43:05.439 read: 
IOPS=4766, BW=18.6MiB/s (19.5MB/s)(18.8MiB/1007msec) 00:43:05.439 slat (nsec): min=1343, max=14182k, avg=103591.80, stdev=767059.78 00:43:05.439 clat (usec): min=3661, max=99422, avg=11900.40, stdev=11322.02 00:43:05.439 lat (usec): min=4207, max=99432, avg=12003.99, stdev=11417.43 00:43:05.439 clat percentiles (usec): 00:43:05.439 | 1.00th=[ 5014], 5.00th=[ 6128], 10.00th=[ 7570], 20.00th=[ 8291], 00:43:05.439 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:43:05.439 | 70.00th=[10159], 80.00th=[10945], 90.00th=[13566], 95.00th=[23725], 00:43:05.439 | 99.00th=[76022], 99.50th=[94897], 99.90th=[99091], 99.95th=[99091], 00:43:05.439 | 99.99th=[99091] 00:43:05.439 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:43:05.439 slat (usec): min=2, max=9612, avg=91.62, stdev=587.89 00:43:05.439 clat (usec): min=3160, max=99332, avg=13701.66, stdev=14078.01 00:43:05.439 lat (usec): min=3169, max=99336, avg=13793.28, stdev=14141.00 00:43:05.439 clat percentiles (usec): 00:43:05.439 | 1.00th=[ 4146], 5.00th=[ 5473], 10.00th=[ 7177], 20.00th=[ 8094], 00:43:05.439 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9110], 00:43:05.439 | 70.00th=[10945], 80.00th=[15926], 90.00th=[19268], 95.00th=[40109], 00:43:05.439 | 99.00th=[86508], 99.50th=[88605], 99.90th=[93848], 99.95th=[93848], 00:43:05.439 | 99.99th=[99091] 00:43:05.439 bw ( KiB/s): min=16656, max=24304, per=34.50%, avg=20480.00, stdev=5407.95, samples=2 00:43:05.439 iops : min= 4164, max= 6076, avg=5120.00, stdev=1351.99, samples=2 00:43:05.439 lat (msec) : 4=0.48%, 10=65.01%, 20=26.52%, 50=4.55%, 100=3.44% 00:43:05.439 cpu : usr=3.78%, sys=6.06%, ctx=378, majf=0, minf=1 00:43:05.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:43:05.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:05.439 issued rwts: total=4800,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:05.439 job1: (groupid=0, jobs=1): err= 0: pid=39219: Fri Dec 13 10:44:58 2024 00:43:05.439 read: IOPS=2605, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1007msec) 00:43:05.439 slat (nsec): min=1430, max=24649k, avg=160209.16, stdev=1074425.20 00:43:05.439 clat (usec): min=1264, max=66504, avg=19921.03, stdev=8600.84 00:43:05.440 lat (usec): min=7502, max=66510, avg=20081.24, stdev=8685.08 00:43:05.440 clat percentiles (usec): 00:43:05.440 | 1.00th=[ 7767], 5.00th=[10945], 10.00th=[11600], 20.00th=[13698], 00:43:05.440 | 30.00th=[16057], 40.00th=[17171], 50.00th=[18482], 60.00th=[19006], 00:43:05.440 | 70.00th=[20317], 80.00th=[22414], 90.00th=[30802], 95.00th=[40109], 00:43:05.440 | 99.00th=[53740], 99.50th=[53740], 99.90th=[66323], 99.95th=[66323], 00:43:05.440 | 99.99th=[66323] 00:43:05.440 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:43:05.440 slat (usec): min=2, max=25741, avg=182.34, stdev=1213.05 00:43:05.440 clat (usec): min=9491, max=62455, avg=24324.09, stdev=10306.04 00:43:05.440 lat (usec): min=9503, max=62505, avg=24506.43, stdev=10418.80 00:43:05.440 clat percentiles (usec): 00:43:05.440 | 1.00th=[10159], 5.00th=[10814], 10.00th=[11338], 20.00th=[17171], 00:43:05.440 | 30.00th=[17957], 40.00th=[18482], 50.00th=[20579], 60.00th=[24773], 00:43:05.440 | 70.00th=[30278], 80.00th=[34341], 90.00th=[38011], 95.00th=[43254], 00:43:05.440 | 99.00th=[56886], 99.50th=[56886], 
99.90th=[58983], 99.95th=[62653], 00:43:05.440 | 99.99th=[62653] 00:43:05.440 bw ( KiB/s): min= 8248, max=15816, per=20.27%, avg=12032.00, stdev=5351.38, samples=2 00:43:05.440 iops : min= 2062, max= 3954, avg=3008.00, stdev=1337.85, samples=2 00:43:05.440 lat (msec) : 2=0.02%, 10=2.02%, 20=55.09%, 50=41.45%, 100=1.42% 00:43:05.440 cpu : usr=3.28%, sys=4.47%, ctx=246, majf=0, minf=1 00:43:05.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:43:05.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:05.440 issued rwts: total=2624,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:05.440 job2: (groupid=0, jobs=1): err= 0: pid=39233: Fri Dec 13 10:44:58 2024 00:43:05.440 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:43:05.440 slat (nsec): min=1606, max=37374k, avg=168942.41, stdev=1274613.82 00:43:05.440 clat (usec): min=9167, max=81519, avg=20891.89, stdev=11383.21 00:43:05.440 lat (usec): min=9176, max=81544, avg=21060.84, stdev=11497.02 00:43:05.440 clat percentiles (usec): 00:43:05.440 | 1.00th=[ 9372], 5.00th=[12780], 10.00th=[13042], 20.00th=[13566], 00:43:05.440 | 30.00th=[15270], 40.00th=[16909], 50.00th=[18220], 60.00th=[18744], 00:43:05.440 | 70.00th=[19268], 80.00th=[21627], 90.00th=[35914], 95.00th=[53216], 00:43:05.440 | 99.00th=[66847], 99.50th=[66847], 99.90th=[67634], 99.95th=[69731], 00:43:05.440 | 99.99th=[81265] 00:43:05.440 write: IOPS=3000, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1007msec); 0 zone resets 00:43:05.440 slat (usec): min=2, max=22806, avg=182.18, stdev=1157.32 00:43:05.440 clat (usec): min=990, max=74167, avg=24497.54, stdev=12898.21 00:43:05.440 lat (usec): min=5554, max=74190, avg=24679.72, stdev=13021.12 00:43:05.440 clat percentiles (usec): 00:43:05.440 | 1.00th=[10421], 5.00th=[11469], 10.00th=[11600], 20.00th=[13829], 00:43:05.440 | 30.00th=[17433], 40.00th=[17695], 50.00th=[18482], 60.00th=[23462], 00:43:05.440 | 70.00th=[30016], 80.00th=[36439], 90.00th=[43779], 95.00th=[56361], 00:43:05.440 | 99.00th=[60031], 99.50th=[60556], 99.90th=[61080], 99.95th=[71828], 00:43:05.440 | 99.99th=[73925] 00:43:05.440 bw ( KiB/s): min= 9608, max=13536, per=19.49%, avg=11572.00, stdev=2777.52, samples=2 00:43:05.440 iops : min= 2402, max= 3384, avg=2893.00, stdev=694.38, samples=2 00:43:05.440 lat (usec) : 1000=0.02% 00:43:05.440 lat (msec) : 10=0.90%, 20=61.03%, 50=31.48%, 100=6.58% 00:43:05.440 cpu : usr=2.68%, sys=4.57%, ctx=235, majf=0, minf=2 00:43:05.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:43:05.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:05.440 issued rwts: total=2560,3021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:05.440 job3: (groupid=0, jobs=1): err= 0: pid=39238: Fri Dec 13 10:44:58 2024 00:43:05.440 read: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec) 00:43:05.440 slat (nsec): min=1160, max=11738k, avg=102978.64, stdev=808086.02 00:43:05.440 clat (usec): min=1831, max=31285, avg=13724.99, stdev=4829.08 00:43:05.440 lat (usec): min=1841, max=31290, avg=13827.97, stdev=4903.02 00:43:05.440 clat percentiles (usec): 00:43:05.440 | 1.00th=[ 2573], 5.00th=[ 6718], 10.00th=[ 9765], 20.00th=[10421], 
00:43:05.440 | 30.00th=[10683], 40.00th=[10945], 50.00th=[13042], 60.00th=[14615], 00:43:05.440 | 70.00th=[16581], 80.00th=[18220], 90.00th=[19530], 95.00th=[22152], 00:43:05.440 | 99.00th=[27919], 99.50th=[28181], 99.90th=[31327], 99.95th=[31327], 00:43:05.440 | 99.99th=[31327] 00:43:05.440 write: IOPS=3772, BW=14.7MiB/s (15.5MB/s)(14.9MiB/1013msec); 0 zone resets 00:43:05.440 slat (usec): min=2, max=12538, avg=146.67, stdev=884.07 00:43:05.440 clat (usec): min=507, max=90435, avg=20708.72, stdev=18212.81 00:43:05.440 lat (usec): min=531, max=90445, avg=20855.38, stdev=18320.92 00:43:05.440 clat percentiles (usec): 00:43:05.440 | 1.00th=[ 2180], 5.00th=[ 6259], 10.00th=[ 8356], 20.00th=[ 9372], 00:43:05.440 | 30.00th=[ 9765], 40.00th=[11469], 50.00th=[13960], 60.00th=[17695], 00:43:05.440 | 70.00th=[19006], 80.00th=[30016], 90.00th=[43254], 95.00th=[66847], 00:43:05.440 | 99.00th=[87557], 99.50th=[89654], 99.90th=[90702], 99.95th=[90702], 00:43:05.440 | 99.99th=[90702] 00:43:05.440 bw ( KiB/s): min=11776, max=17776, per=24.89%, avg=14776.00, stdev=4242.64, samples=2 00:43:05.440 iops : min= 2944, max= 4444, avg=3694.00, stdev=1060.66, samples=2 00:43:05.440 lat (usec) : 750=0.03% 00:43:05.440 lat (msec) : 2=0.45%, 4=2.12%, 10=18.77%, 20=61.38%, 50=13.34% 00:43:05.440 lat (msec) : 100=3.92% 00:43:05.440 cpu : usr=1.88%, sys=5.14%, ctx=290, majf=0, minf=1 00:43:05.440 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:43:05.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:05.440 issued rwts: total=3584,3822,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:05.440 00:43:05.440 Run status group 0 (all jobs): 00:43:05.440 READ: bw=52.3MiB/s (54.9MB/s), 9.93MiB/s-18.6MiB/s (10.4MB/s-19.5MB/s), io=53.0MiB (55.6MB), run=1007-1013msec 00:43:05.440 WRITE: bw=58.0MiB/s (60.8MB/s), 11.7MiB/s-19.9MiB/s (12.3MB/s-20.8MB/s), io=58.7MiB (61.6MB), run=1007-1013msec 00:43:05.440 00:43:05.440 Disk stats (read/write): 00:43:05.440 nvme0n1: ios=4122/4423, merge=0/0, ticks=42810/54972, in_queue=97782, util=98.20% 00:43:05.440 nvme0n2: ios=2063/2511, merge=0/0, ticks=21439/30999, in_queue=52438, util=99.39% 00:43:05.440 nvme0n3: ios=2075/2487, merge=0/0, ticks=22789/29943, in_queue=52732, util=95.62% 00:43:05.440 nvme0n4: ios=3112/3455, merge=0/0, ticks=41187/63543, in_queue=104730, util=99.58% 00:43:05.440 10:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:43:05.440 10:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=39317 00:43:05.440 10:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:43:05.440 10:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:43:05.440 [global] 00:43:05.440 thread=1 00:43:05.440 invalidate=1 00:43:05.440 rw=read 00:43:05.440 time_based=1 00:43:05.440 runtime=10 00:43:05.440 ioengine=libaio 00:43:05.440 direct=1 00:43:05.440 bs=4096 00:43:05.440 iodepth=1 00:43:05.440 norandommap=1 00:43:05.440 numjobs=1 00:43:05.440 00:43:05.440 [job0] 00:43:05.440 filename=/dev/nvme0n1 00:43:05.440 [job1] 00:43:05.440 filename=/dev/nvme0n2 00:43:05.440 [job2] 00:43:05.440 filename=/dev/nvme0n3 00:43:05.440 [job3] 
00:43:05.440 filename=/dev/nvme0n4 00:43:05.440 Could not set queue depth (nvme0n1) 00:43:05.440 Could not set queue depth (nvme0n2) 00:43:05.440 Could not set queue depth (nvme0n3) 00:43:05.440 Could not set queue depth (nvme0n4) 00:43:05.440 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:05.440 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:05.440 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:05.440 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:05.440 fio-3.35 00:43:05.440 Starting 4 threads 00:43:08.723 10:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:43:08.723 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=8409088, buflen=4096 00:43:08.723 fio: pid=39625, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:08.723 10:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:43:08.723 10:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:08.723 10:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:43:08.723 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=299008, buflen=4096 00:43:08.723 fio: pid=39624, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:08.723 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=13869056, buflen=4096 00:43:08.723 fio: pid=39618, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:08.983 10:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:08.983 10:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:43:08.983 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=335872, buflen=4096 00:43:08.983 fio: pid=39623, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:43:08.983 00:43:08.983 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=39618: Fri Dec 13 10:45:02 2024 00:43:08.983 read: IOPS=1070, BW=4282KiB/s (4385kB/s)(13.2MiB/3163msec) 00:43:08.983 slat (usec): min=6, max=24841, avg=17.40, stdev=458.49 00:43:08.983 clat (usec): min=199, max=43041, avg=909.08, stdev=5026.69 00:43:08.983 lat (usec): min=206, max=66076, avg=926.48, stdev=5130.34 00:43:08.983 clat percentiles (usec): 00:43:08.983 | 1.00th=[ 239], 5.00th=[ 245], 10.00th=[ 247], 20.00th=[ 249], 00:43:08.983 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 297], 60.00th=[ 297], 00:43:08.983 | 70.00th=[ 302], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 322], 00:43:08.983 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:43:08.983 | 99.99th=[43254] 
00:43:08.983 bw ( KiB/s): min= 93, max=12752, per=68.27%, avg=4510.17, stdev=5437.18, samples=6 00:43:08.983 iops : min= 23, max= 3188, avg=1127.50, stdev=1359.34, samples=6 00:43:08.983 lat (usec) : 250=22.62%, 500=75.70%, 750=0.06%, 1000=0.03% 00:43:08.983 lat (msec) : 4=0.03%, 50=1.54% 00:43:08.983 cpu : usr=0.44%, sys=0.79%, ctx=3389, majf=0, minf=1 00:43:08.983 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:08.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.983 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.983 issued rwts: total=3387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:08.983 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:08.983 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=39623: Fri Dec 13 10:45:02 2024 00:43:08.983 read: IOPS=24, BW=96.8KiB/s (99.2kB/s)(328KiB/3387msec) 00:43:08.983 slat (usec): min=9, max=23754, avg=650.92, stdev=3156.94 00:43:08.983 clat (usec): min=542, max=44953, avg=40629.47, stdev=4518.83 00:43:08.983 lat (usec): min=574, max=64933, avg=41202.25, stdev=5535.95 00:43:08.983 clat percentiles (usec): 00:43:08.983 | 1.00th=[ 545], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:43:08.983 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:08.983 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:43:08.983 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:43:08.983 | 99.99th=[44827] 00:43:08.983 bw ( KiB/s): min= 96, max= 104, per=1.47%, avg=97.83, stdev= 3.25, samples=6 00:43:08.983 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:43:08.983 lat (usec) : 750=1.20% 00:43:08.983 lat (msec) : 50=97.59% 00:43:08.983 cpu : usr=0.00%, sys=0.32%, ctx=86, majf=0, minf=2 00:43:08.983 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:08.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.983 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.983 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:08.983 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:08.983 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=39624: Fri Dec 13 10:45:02 2024 00:43:08.983 read: IOPS=25, BW=98.8KiB/s (101kB/s)(292KiB/2956msec) 00:43:08.983 slat (nsec): min=10083, max=31948, avg=23149.32, stdev=3305.58 00:43:08.983 clat (usec): min=460, max=51893, avg=40163.80, stdev=6842.10 00:43:08.983 lat (usec): min=488, max=51916, avg=40186.96, stdev=6841.04 00:43:08.983 clat percentiles (usec): 00:43:08.983 | 1.00th=[ 461], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:43:08.983 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:08.983 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:43:08.983 | 99.00th=[51643], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:43:08.983 | 99.99th=[51643] 00:43:08.983 bw ( KiB/s): min= 96, max= 104, per=1.47%, avg=97.60, stdev= 3.58, samples=5 00:43:08.983 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:43:08.983 lat (usec) : 500=1.35%, 750=1.35% 00:43:08.983 lat (msec) : 50=94.59%, 100=1.35% 00:43:08.983 cpu : usr=0.14%, sys=0.00%, ctx=74, majf=0, minf=2 00:43:08.983 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:08.983 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.983 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.983 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:08.983 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:08.983 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=39625: Fri Dec 13 10:45:02 2024 00:43:08.983 read: IOPS=753, BW=3011KiB/s (3084kB/s)(8212KiB/2727msec) 00:43:08.983 slat (nsec): min=6329, max=36976, avg=7533.22, stdev=2704.33 00:43:08.983 clat (usec): min=218, max=41908, avg=1309.25, stdev=6342.25 00:43:08.983 lat (usec): min=225, max=41926, avg=1316.77, stdev=6344.50 00:43:08.983 clat percentiles (usec): 00:43:08.983 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 265], 20.00th=[ 297], 00:43:08.983 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 302], 60.00th=[ 306], 00:43:08.983 | 70.00th=[ 306], 80.00th=[ 310], 90.00th=[ 314], 95.00th=[ 318], 00:43:08.983 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:43:08.983 | 99.99th=[41681] 00:43:08.983 bw ( KiB/s): min= 96, max=12912, per=49.59%, avg=3276.80, stdev=5442.63, samples=5 00:43:08.983 iops : min= 24, max= 3228, avg=819.20, stdev=1360.66, samples=5 00:43:08.983 lat (usec) : 250=9.20%, 500=88.07%, 750=0.10%, 1000=0.05% 00:43:08.983 lat (msec) : 2=0.05%, 50=2.48% 00:43:08.983 cpu : usr=0.26%, sys=0.66%, ctx=2054, majf=0, minf=2 00:43:08.983 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:08.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.983 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.983 issued rwts: total=2054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:08.983 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:08.983 00:43:08.983 Run status group 0 (all jobs): 00:43:08.983 READ: bw=6606KiB/s (6765kB/s), 96.8KiB/s-4282KiB/s (99.2kB/s-4385kB/s), io=21.9MiB (22.9MB), run=2727-3387msec 00:43:08.983 00:43:08.983 Disk stats (read/write): 00:43:08.983 nvme0n1: ios=3385/0, merge=0/0, ticks=3037/0, in_queue=3037, util=94.73% 00:43:08.983 nvme0n2: ios=81/0, merge=0/0, ticks=3292/0, in_queue=3292, util=95.15% 00:43:08.983 nvme0n3: ios=71/0, merge=0/0, ticks=2852/0, in_queue=2852, util=96.52% 00:43:08.983 nvme0n4: ios=2050/0, merge=0/0, ticks=2546/0, in_queue=2546, util=96.45% 00:43:09.242 10:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:09.242 10:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:43:09.501 10:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:09.501 10:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:43:09.760 10:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:09.760 10:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc4 00:43:10.018 10:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:10.018 10:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:43:10.277 10:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:10.277 10:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:43:10.535 10:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:43:10.535 10:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 39317 00:43:10.535 10:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:43:10.535 10:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:11.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:43:11.470 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:11.470 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:43:11.470 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:43:11.470 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:11.470 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:43:11.470 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:11.470 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:43:11.470 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:43:11.470 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:43:11.470 nvmf hotplug test: fio failed as expected 00:43:11.470 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:11.728 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:43:11.728 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:43:11.728 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:43:11.728 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:43:11.729 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:43:11.729 
10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:11.729 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:43:11.729 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:11.729 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:43:11.729 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:11.729 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:11.729 rmmod nvme_tcp 00:43:11.729 rmmod nvme_fabrics 00:43:11.729 rmmod nvme_keyring 00:43:11.729 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:11.729 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:43:11.729 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:43:11.729 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 36676 ']' 00:43:11.729 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 36676 00:43:11.729 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 36676 ']' 00:43:11.729 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 36676 00:43:11.729 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:43:11.729 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:11.729 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 36676 00:43:11.729 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:11.729 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:11.729 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 36676' 00:43:11.729 killing process with pid 36676 00:43:11.729 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 36676 00:43:11.729 10:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 36676 00:43:13.100 10:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:13.100 10:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:13.100 10:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:13.100 10:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:43:13.100 10:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:43:13.100 10:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:43:13.100 10:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:43:13.100 10:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:13.100 10:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:13.100 10:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:13.100 10:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:13.100 10:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:15.003 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:15.003 00:43:15.003 real 0m28.576s 00:43:15.003 user 1m38.256s 00:43:15.003 sys 0m10.752s 00:43:15.003 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:15.003 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:15.003 ************************************ 00:43:15.003 END TEST nvmf_fio_target 00:43:15.003 ************************************ 00:43:15.003 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:43:15.003 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:15.003 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:15.003 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:15.003 ************************************ 00:43:15.003 START TEST nvmf_bdevio 00:43:15.003 ************************************ 00:43:15.003 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:43:15.262 * Looking for test storage... 
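The fio-wrapper invocations traced above appear to map their flags straight onto the [global] job options that fio echoes back: -i 4096 -> bs=4096, -d -> iodepth, -t -> rw, -r -> runtime, and -v -> the crc32c-intel verify settings, with -p nvmf pointing the jobs at the /dev/nvme0n* namespaces. A minimal sketch of reproducing the depth-1 randwrite pass as a standalone fio job, reconstructed from the options dumped in this log; the temporary file name and the single-job reduction are illustrative, not part of the harness, and the flag-to-option mapping is inferred from the trace rather than taken from wrapper documentation:

    #!/usr/bin/env bash
    # Rebuild the [global]/[job0] options fio printed above as a standalone job file
    # and run it directly, bypassing the wrapper. Options are copied from the trace;
    # only job0 (/dev/nvme0n1) is kept here for brevity.
    cat > /tmp/nvmf-randwrite-d1.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=randwrite
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    EOF
    fio /tmp/nvmf-randwrite-d1.fio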
00:43:15.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:15.263 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:15.263 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:43:15.263 10:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:15.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:15.263 --rc genhtml_branch_coverage=1 00:43:15.263 --rc genhtml_function_coverage=1 00:43:15.263 --rc genhtml_legend=1 00:43:15.263 --rc geninfo_all_blocks=1 00:43:15.263 --rc geninfo_unexecuted_blocks=1 00:43:15.263 00:43:15.263 ' 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:15.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:15.263 --rc genhtml_branch_coverage=1 00:43:15.263 --rc genhtml_function_coverage=1 00:43:15.263 --rc genhtml_legend=1 00:43:15.263 --rc geninfo_all_blocks=1 00:43:15.263 --rc geninfo_unexecuted_blocks=1 00:43:15.263 00:43:15.263 ' 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:15.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:15.263 --rc genhtml_branch_coverage=1 00:43:15.263 --rc genhtml_function_coverage=1 00:43:15.263 --rc genhtml_legend=1 00:43:15.263 --rc geninfo_all_blocks=1 00:43:15.263 --rc geninfo_unexecuted_blocks=1 00:43:15.263 00:43:15.263 ' 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:15.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:15.263 --rc genhtml_branch_coverage=1 00:43:15.263 --rc genhtml_function_coverage=1 00:43:15.263 --rc genhtml_legend=1 00:43:15.263 --rc geninfo_all_blocks=1 00:43:15.263 --rc geninfo_unexecuted_blocks=1 00:43:15.263 00:43:15.263 ' 00:43:15.263 10:45:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:15.263 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:15.264 10:45:09 
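
The paths/export.sh trace above prepends the Go, protoc and golangci toolchain directories to PATH each time it is sourced, which is apparently why the exported PATH repeats those prefixes many times within one session. Purely as an illustration (this is not what paths/export.sh itself does), duplicate entries could be squashed while keeping first-seen order like this:

    # Sketch: print PATH with duplicate entries removed, first occurrence wins.
    dedupe_path() {
        local entry out=
        local IFS=:
        for entry in $PATH; do
            case ":$out:" in
                *":$entry:"*) ;;                 # already present, skip
                *) out=${out:+$out:}$entry ;;    # append new entry
            esac
        done
        printf '%s\n' "$out"
    }
    PATH=$(dedupe_path)
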
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:43:15.264 10:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:21.826 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:21.827 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:21.827 10:45:14 
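
The gather_supported_nvmf_pci_devs trace here builds per-NIC-family arrays keyed by PCI vendor:device IDs (e810, x722, mlx) and then reports each matching function, such as the 0x8086:0x159b Intel E810 port found above. A condensed sketch of that style of discovery using the same sysfs layout the test reads, shown for the single E810 ID only and not reproducing the real nvmf/common.sh caching logic:

    # Sketch: find E810 (0x8086:0x159b) PCI functions and their net devices.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")      # e.g. 0x8086
        device=$(cat "$pci/device")      # e.g. 0x159b
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        for netdev in "$pci"/net/*; do
            [[ -e $netdev ]] || continue
            echo "Found ${pci##*/}: ${netdev##*/}"
        done
    done
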
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:21.827 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:21.827 Found net devices under 0000:af:00.0: cvl_0_0 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:21.827 Found net devices under 0000:af:00.1: cvl_0_1 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:21.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:21.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:43:21.827 00:43:21.827 --- 10.0.0.2 ping statistics --- 00:43:21.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:21.827 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:21.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:21.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:43:21.827 00:43:21.827 --- 10.0.0.1 ping statistics --- 00:43:21.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:21.827 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:21.827 10:45:14 
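
The nvmf_tcp_init sequence traced above builds the two-port loopback topology the TCP tests run over: the target-side port (cvl_0_0) is moved into its own network namespace and addressed 10.0.0.2/24, the initiator-side port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, port 4420 is opened in the firewall, and both directions are ping-checked. Condensed from the commands visible in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
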
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=44528 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 44528 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 44528 ']' 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:21.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:21.827 10:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:21.828 [2024-12-13 10:45:14.852040] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:21.828 [2024-12-13 10:45:14.854122] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:43:21.828 [2024-12-13 10:45:14.854189] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:21.828 [2024-12-13 10:45:14.973213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:21.828 [2024-12-13 10:45:15.084243] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:21.828 [2024-12-13 10:45:15.084283] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:21.828 [2024-12-13 10:45:15.084295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:21.828 [2024-12-13 10:45:15.084304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:21.828 [2024-12-13 10:45:15.084313] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:21.828 [2024-12-13 10:45:15.086839] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:43:21.828 [2024-12-13 10:45:15.086895] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:43:21.828 [2024-12-13 10:45:15.086974] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:43:21.828 [2024-12-13 10:45:15.086998] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:43:21.828 [2024-12-13 10:45:15.429638] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
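
nvmfappstart launches the target inside the target namespace (here as pid 44528) and then waits for its RPC socket before any configuration is sent; the wait is done by the waitforlisten helper in autotest_common.sh. A simplified sketch of that start-and-wait step, with the polling loop being illustrative rather than the real helper:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        [[ -S /var/tmp/spdk.sock ]] && break    # RPC socket is listening
        sleep 0.1
    done
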
00:43:21.828 [2024-12-13 10:45:15.431286] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:21.828 [2024-12-13 10:45:15.433122] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:43:21.828 [2024-12-13 10:45:15.434037] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:21.828 [2024-12-13 10:45:15.434341] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:21.828 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:21.828 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:43:21.828 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:21.828 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:21.828 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:21.828 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:21.828 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:21.828 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.828 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:22.086 [2024-12-13 10:45:15.719943] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:22.086 Malloc0 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.086 10:45:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:22.086 [2024-12-13 10:45:15.851946] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:22.086 { 00:43:22.086 "params": { 00:43:22.086 "name": "Nvme$subsystem", 00:43:22.086 "trtype": "$TEST_TRANSPORT", 00:43:22.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:22.086 "adrfam": "ipv4", 00:43:22.086 "trsvcid": "$NVMF_PORT", 00:43:22.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:22.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:22.086 "hdgst": ${hdgst:-false}, 00:43:22.086 "ddgst": ${ddgst:-false} 00:43:22.086 }, 00:43:22.086 "method": "bdev_nvme_attach_controller" 00:43:22.086 } 00:43:22.086 EOF 00:43:22.086 )") 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:43:22.086 10:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:22.086 "params": { 00:43:22.086 "name": "Nvme1", 00:43:22.086 "trtype": "tcp", 00:43:22.086 "traddr": "10.0.0.2", 00:43:22.086 "adrfam": "ipv4", 00:43:22.086 "trsvcid": "4420", 00:43:22.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:22.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:22.086 "hdgst": false, 00:43:22.086 "ddgst": false 00:43:22.086 }, 00:43:22.086 "method": "bdev_nvme_attach_controller" 00:43:22.086 }' 00:43:22.086 [2024-12-13 10:45:15.926805] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
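
The rpc_cmd calls traced above configure the target before bdevio starts: create the TCP transport, back it with a 64 MiB/512-byte Malloc bdev, expose that bdev as a namespace of cnode1, and listen on 10.0.0.2:4420; gen_nvmf_target_json then feeds bdevio a matching bdev_nvme_attach_controller config over /dev/fd/62. The same configuration expressed as direct scripts/rpc.py invocations against the target's /var/tmp/spdk.sock (illustrative; the test drives these through its rpc_cmd wrapper):

    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
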
00:43:22.086 [2024-12-13 10:45:15.926889] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44772 ] 00:43:22.344 [2024-12-13 10:45:16.039332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:22.344 [2024-12-13 10:45:16.160801] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:22.344 [2024-12-13 10:45:16.160815] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:22.344 [2024-12-13 10:45:16.160822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:43:22.909 I/O targets: 00:43:22.909 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:43:22.909 00:43:22.909 00:43:22.909 CUnit - A unit testing framework for C - Version 2.1-3 00:43:22.909 http://cunit.sourceforge.net/ 00:43:22.909 00:43:22.909 00:43:22.909 Suite: bdevio tests on: Nvme1n1 00:43:22.909 Test: blockdev write read block ...passed 00:43:23.167 Test: blockdev write zeroes read block ...passed 00:43:23.167 Test: blockdev write zeroes read no split ...passed 00:43:23.167 Test: blockdev write zeroes read split ...passed 00:43:23.167 Test: blockdev write zeroes read split partial ...passed 00:43:23.167 Test: blockdev reset ...[2024-12-13 10:45:17.002899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:43:23.167 [2024-12-13 10:45:17.003003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor 00:43:23.167 [2024-12-13 10:45:17.009683] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:43:23.167 passed 00:43:23.167 Test: blockdev write read 8 blocks ...passed 00:43:23.167 Test: blockdev write read size > 128k ...passed 00:43:23.167 Test: blockdev write read invalid size ...passed 00:43:23.167 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:23.167 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:23.167 Test: blockdev write read max offset ...passed 00:43:23.425 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:23.425 Test: blockdev writev readv 8 blocks ...passed 00:43:23.425 Test: blockdev writev readv 30 x 1block ...passed 00:43:23.425 Test: blockdev writev readv block ...passed 00:43:23.425 Test: blockdev writev readv size > 128k ...passed 00:43:23.425 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:23.425 Test: blockdev comparev and writev ...[2024-12-13 10:45:17.182322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:23.425 [2024-12-13 10:45:17.182359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:23.425 [2024-12-13 10:45:17.182378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:23.425 [2024-12-13 10:45:17.182390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:23.425 [2024-12-13 10:45:17.182758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:23.425 [2024-12-13 10:45:17.182775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:23.425 [2024-12-13 10:45:17.182791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:23.425 [2024-12-13 10:45:17.182802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:23.425 [2024-12-13 10:45:17.183156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:23.425 [2024-12-13 10:45:17.183173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:23.425 [2024-12-13 10:45:17.183188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:23.425 [2024-12-13 10:45:17.183199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:23.425 [2024-12-13 10:45:17.183563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:23.425 [2024-12-13 10:45:17.183580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:23.425 [2024-12-13 10:45:17.183596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:23.425 [2024-12-13 10:45:17.183606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:23.425 passed 00:43:23.425 Test: blockdev nvme passthru rw ...passed 00:43:23.425 Test: blockdev nvme passthru vendor specific ...[2024-12-13 10:45:17.266897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:23.425 [2024-12-13 10:45:17.266926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:23.425 [2024-12-13 10:45:17.267073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:23.425 [2024-12-13 10:45:17.267087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:23.425 [2024-12-13 10:45:17.267218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:23.425 [2024-12-13 10:45:17.267231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:23.425 [2024-12-13 10:45:17.267370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:23.425 [2024-12-13 10:45:17.267383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:23.425 passed 00:43:23.425 Test: blockdev nvme admin passthru ...passed 00:43:23.682 Test: blockdev copy ...passed 00:43:23.682 00:43:23.682 Run Summary: Type Total Ran Passed Failed Inactive 00:43:23.682 suites 1 1 n/a 0 0 00:43:23.682 tests 23 23 23 0 0 00:43:23.682 asserts 152 152 152 0 n/a 00:43:23.682 00:43:23.682 Elapsed time = 1.234 seconds 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:24.617 rmmod nvme_tcp 00:43:24.617 rmmod nvme_fabrics 00:43:24.617 rmmod nvme_keyring 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
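
After the bdevio suite passes, teardown mirrors the setup in reverse: the subsystem is deleted, the host-side NVMe/TCP modules are unloaded, the target process is killed, the SPDK_NVMF iptables rule is stripped and the test namespace removed. A condensed sketch of the sequence traced here, not the literal nvmftestfini implementation:

    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -r nvme-tcp nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1
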
00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 44528 ']' 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 44528 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 44528 ']' 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 44528 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 44528 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 44528' 00:43:24.617 killing process with pid 44528 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 44528 00:43:24.617 10:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 44528 00:43:25.994 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:25.994 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:25.994 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:25.994 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:43:25.994 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:43:25.994 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:25.994 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:43:25.994 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:25.994 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:25.994 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:25.994 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:25.994 10:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:27.898 10:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:27.898 00:43:27.898 real 0m12.847s 00:43:27.898 user 0m18.399s 00:43:27.898 
sys 0m5.563s 00:43:27.898 10:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:27.898 10:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:27.898 ************************************ 00:43:27.898 END TEST nvmf_bdevio 00:43:27.899 ************************************ 00:43:27.899 10:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:43:27.899 00:43:27.899 real 4m57.892s 00:43:27.899 user 10m10.329s 00:43:27.899 sys 1m49.523s 00:43:27.899 10:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:27.899 10:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:27.899 ************************************ 00:43:27.899 END TEST nvmf_target_core_interrupt_mode 00:43:27.899 ************************************ 00:43:27.899 10:45:21 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:43:27.899 10:45:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:27.899 10:45:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:27.899 10:45:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:28.158 ************************************ 00:43:28.158 START TEST nvmf_interrupt 00:43:28.158 ************************************ 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:43:28.158 * Looking for test storage... 
00:43:28.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:28.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:28.158 --rc genhtml_branch_coverage=1 00:43:28.158 --rc genhtml_function_coverage=1 00:43:28.158 --rc genhtml_legend=1 00:43:28.158 --rc geninfo_all_blocks=1 00:43:28.158 --rc geninfo_unexecuted_blocks=1 00:43:28.158 00:43:28.158 ' 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:28.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:28.158 --rc genhtml_branch_coverage=1 00:43:28.158 --rc genhtml_function_coverage=1 00:43:28.158 --rc genhtml_legend=1 00:43:28.158 --rc geninfo_all_blocks=1 00:43:28.158 --rc geninfo_unexecuted_blocks=1 00:43:28.158 00:43:28.158 ' 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:28.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:28.158 --rc genhtml_branch_coverage=1 00:43:28.158 --rc genhtml_function_coverage=1 00:43:28.158 --rc genhtml_legend=1 00:43:28.158 --rc geninfo_all_blocks=1 00:43:28.158 --rc geninfo_unexecuted_blocks=1 00:43:28.158 00:43:28.158 ' 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:28.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:28.158 --rc genhtml_branch_coverage=1 00:43:28.158 --rc genhtml_function_coverage=1 00:43:28.158 --rc genhtml_legend=1 00:43:28.158 --rc geninfo_all_blocks=1 00:43:28.158 --rc geninfo_unexecuted_blocks=1 00:43:28.158 00:43:28.158 ' 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:43:28.158 10:45:21 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:28.158 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:43:28.158 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:28.158 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:28.158 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:28.158 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:28.158 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:28.158 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:28.158 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:28.158 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:28.158 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:28.158 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:28.158 10:45:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:43:28.158 10:45:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:43:28.158 10:45:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:43:28.158 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:28.158 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:28.158 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:28.159 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:28.159 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:28.159 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:28.159 10:45:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:28.159 10:45:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:28.159 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:28.159 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:28.159 10:45:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:43:28.159 10:45:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:33.426 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:33.426 10:45:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:33.426 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:33.426 Found net devices under 0000:af:00.0: cvl_0_0 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:33.426 Found net devices under 0000:af:00.1: cvl_0_1 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:43:33.426 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:33.427 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:33.427 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:33.427 10:45:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:33.427 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:33.427 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:33.427 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:33.427 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:33.427 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:33.427 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:33.427 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:33.427 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:33.427 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:33.427 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:33.427 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:33.427 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:33.427 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:33.427 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:33.427 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:33.427 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:33.427 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:33.427 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:33.685 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:33.685 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:33.685 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:33.685 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:33.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:33.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:43:33.685 00:43:33.685 --- 10.0.0.2 ping statistics --- 00:43:33.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:33.685 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:33.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:33.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:43:33.686 00:43:33.686 --- 10.0.0.1 ping statistics --- 00:43:33.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:33.686 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=48699 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 48699 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 48699 ']' 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:33.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:33.686 10:45:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:33.686 [2024-12-13 10:45:27.527148] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:33.686 [2024-12-13 10:45:27.529179] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:43:33.686 [2024-12-13 10:45:27.529244] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:33.944 [2024-12-13 10:45:27.647109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:33.944 [2024-12-13 10:45:27.749103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:43:33.944 [2024-12-13 10:45:27.749145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:33.944 [2024-12-13 10:45:27.749156] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:33.944 [2024-12-13 10:45:27.749165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:33.944 [2024-12-13 10:45:27.749178] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:33.944 [2024-12-13 10:45:27.751239] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:33.944 [2024-12-13 10:45:27.751251] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:34.203 [2024-12-13 10:45:28.060613] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:34.203 [2024-12-13 10:45:28.061367] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:34.203 [2024-12-13 10:45:28.061610] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:34.461 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:34.461 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:43:34.461 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:34.461 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:34.461 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:43:34.720 5000+0 records in 00:43:34.720 5000+0 records out 00:43:34.720 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0166808 s, 614 MB/s 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:34.720 AIO0 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:34.720 [2024-12-13 10:45:28.456321] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:34.720 10:45:28 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:34.720 [2024-12-13 10:45:28.496667] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 48699 0 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 48699 0 idle 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=48699 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 48699 -w 256 00:43:34.720 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 48699 root 20 0 20.1t 207360 100608 S 0.0 0.2 0:00.62 reactor_0' 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 48699 root 20 0 20.1t 207360 100608 S 0.0 0.2 0:00.62 reactor_0 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 48699 1 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 48699 1 idle 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=48699 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 48699 -w 256 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 48704 root 20 0 20.1t 207360 100608 S 0.0 0.2 0:00.00 reactor_1' 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 48704 root 20 0 20.1t 207360 100608 S 0.0 0.2 0:00.00 reactor_1 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:34.979 10:45:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:43:35.238 10:45:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=48963 00:43:35.238 10:45:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:43:35.238 10:45:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
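The trace above starts spdk_nvme_perf against the interrupt-mode target and then polls each reactor thread's CPU usage to confirm it leaves the idle state while I/O is in flight. A condensed sketch of that check follows; it is not the test's verbatim interrupt/common.sh source, the field position assumes top's default batch-mode column layout (the same one the trace parses with awk '{print $9}'), and the thresholds are simplified to the 30% value the busy check uses.

# Sketch: decide whether reactor_<idx> of the target process is busy or idle
# by reading its %CPU from a single batch-mode top sample.
check_reactor() {
    local pid=$1 idx=$2 want=$3            # want is "busy" or "idle"
    local threshold=30 line cpu
    line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
    cpu=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu=${cpu%.*}                           # truncate "99.9" -> 99, "0.0" -> 0
    if [[ $want == busy ]]; then
        (( cpu >= threshold ))              # polling under load
    else
        (( cpu <= threshold ))              # interrupt mode: near-0% when idle
    fi
}
# Example: once the perf workload is running, reactor 0 of pid 48699 should be busy.
# check_reactor 48699 0 busy || echo "reactor_0 unexpectedly idle"

In interrupt mode the reactors should sit at essentially 0% CPU with no I/O outstanding, which is what the reactor_is_idle checks earlier in the trace assert before the workload starts.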
00:43:35.238 10:45:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:43:35.238 10:45:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 48699 0 00:43:35.238 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 48699 0 busy 00:43:35.238 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=48699 00:43:35.238 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:35.238 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:43:35.238 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:43:35.238 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:35.238 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:43:35.238 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:35.238 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:35.238 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:35.238 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 48699 -w 256 00:43:35.238 10:45:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:35.238 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 48699 root 20 0 20.1t 208128 101376 S 0.0 0.2 0:00.62 reactor_0' 00:43:35.238 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 48699 root 20 0 20.1t 208128 101376 S 0.0 0.2 0:00.62 reactor_0 00:43:35.238 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:35.238 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:35.238 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:35.238 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:35.238 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:35.238 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:43:35.238 10:45:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:43:36.183 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:43:36.183 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:36.183 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 48699 -w 256 00:43:36.183 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:36.441 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 48699 root 20 0 20.1t 220416 101376 R 99.9 0.2 0:02.75 reactor_0' 00:43:36.441 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 48699 root 20 0 20.1t 220416 101376 R 99.9 0.2 0:02.75 reactor_0 00:43:36.441 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:36.441 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:36.441 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:43:36.441 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:43:36.442 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:36.442 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < 
busy_threshold )) 00:43:36.442 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:43:36.442 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:36.442 10:45:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:43:36.442 10:45:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:43:36.442 10:45:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 48699 1 00:43:36.442 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 48699 1 busy 00:43:36.442 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=48699 00:43:36.442 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:36.442 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:43:36.442 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:43:36.442 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:36.442 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:43:36.442 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:36.442 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:36.442 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:36.442 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 48699 -w 256 00:43:36.442 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:36.700 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 48704 root 20 0 20.1t 220416 101376 R 99.9 0.2 0:01.26 reactor_1' 00:43:36.700 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 48704 root 20 0 20.1t 220416 101376 R 99.9 0.2 0:01.26 reactor_1 00:43:36.700 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:36.700 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:36.700 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:43:36.700 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:43:36.700 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:36.700 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:43:36.700 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:43:36.700 10:45:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:36.700 10:45:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 48963 00:43:46.761 Initializing NVMe Controllers 00:43:46.761 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:43:46.761 Controller IO queue size 256, less than required. 00:43:46.761 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:43:46.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:43:46.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:43:46.761 Initialization complete. Launching workers. 
00:43:46.761 ======================================================== 00:43:46.761 Latency(us) 00:43:46.761 Device Information : IOPS MiB/s Average min max 00:43:46.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 15302.20 59.77 16737.49 5009.86 23272.33 00:43:46.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 15189.50 59.33 16863.01 5272.81 59591.42 00:43:46.761 ======================================================== 00:43:46.761 Total : 30491.70 119.11 16800.02 5009.86 59591.42 00:43:46.761 00:43:46.761 10:45:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:43:46.761 10:45:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 48699 0 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 48699 0 idle 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=48699 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 48699 -w 256 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 48699 root 20 0 20.1t 220416 101376 R 0.0 0.2 0:20.60 reactor_0' 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 48699 root 20 0 20.1t 220416 101376 R 0.0 0.2 0:20.60 reactor_0 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 48699 1 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 48699 1 idle 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=48699 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 48699 -w 256 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 48704 root 20 0 20.1t 220416 101376 S 0.0 0.2 0:10.00 reactor_1' 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 48704 root 20 0 20.1t 220416 101376 S 0.0 0.2 0:10.00 reactor_1 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:46.762 10:45:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:46.762 10:45:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:43:46.762 10:45:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:43:46.762 10:45:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:43:46.762 10:45:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:43:46.762 10:45:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:43:48.664 10:45:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in 
{0..1} 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 48699 0 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 48699 0 idle 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=48699 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 48699 -w 256 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 48699 root 20 0 20.1t 274944 120576 S 0.0 0.3 0:20.99 reactor_0' 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 48699 root 20 0 20.1t 274944 120576 S 0.0 0.3 0:20.99 reactor_0 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 48699 1 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 48699 1 idle 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=48699 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:48.665 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 48699 -w 256 00:43:48.923 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 48704 root 20 0 20.1t 274944 120576 S 0.0 0.3 0:10.16 reactor_1' 00:43:48.923 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 48704 root 20 0 20.1t 274944 120576 S 0.0 0.3 0:10.16 reactor_1 00:43:48.923 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:48.923 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:48.923 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:48.923 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:48.923 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:48.923 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:48.923 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:48.923 10:45:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:48.923 10:45:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:49.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:49.490 rmmod nvme_tcp 00:43:49.490 rmmod nvme_fabrics 00:43:49.490 rmmod nvme_keyring 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 48699 ']' 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt 
-- nvmf/common.sh@518 -- # killprocess 48699 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 48699 ']' 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 48699 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 48699 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 48699' 00:43:49.490 killing process with pid 48699 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 48699 00:43:49.490 10:45:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 48699 00:43:50.866 10:45:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:50.866 10:45:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:50.866 10:45:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:50.866 10:45:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:43:50.866 10:45:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:43:50.866 10:45:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:43:50.866 10:45:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:50.866 10:45:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:50.866 10:45:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:50.866 10:45:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:50.866 10:45:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:50.866 10:45:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:52.766 10:45:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:52.766 00:43:52.766 real 0m24.694s 00:43:52.766 user 0m42.171s 00:43:52.766 sys 0m8.165s 00:43:52.766 10:45:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:52.766 10:45:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:52.766 ************************************ 00:43:52.766 END TEST nvmf_interrupt 00:43:52.766 ************************************ 00:43:52.766 00:43:52.766 real 37m20.928s 00:43:52.766 user 92m18.694s 00:43:52.766 sys 9m46.377s 00:43:52.766 10:45:46 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:52.766 10:45:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:52.766 ************************************ 00:43:52.766 END TEST nvmf_tcp 00:43:52.766 ************************************ 00:43:52.766 10:45:46 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:43:52.766 10:45:46 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:43:52.767 10:45:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:52.767 10:45:46 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:43:52.767 10:45:46 -- common/autotest_common.sh@10 -- # set +x 00:43:52.767 ************************************ 00:43:52.767 START TEST spdkcli_nvmf_tcp 00:43:52.767 ************************************ 00:43:52.767 10:45:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:43:53.026 * Looking for test storage... 00:43:53.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:53.026 10:45:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:53.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:53.026 --rc genhtml_branch_coverage=1 00:43:53.026 --rc genhtml_function_coverage=1 00:43:53.026 --rc genhtml_legend=1 00:43:53.026 --rc geninfo_all_blocks=1 00:43:53.026 --rc geninfo_unexecuted_blocks=1 00:43:53.027 00:43:53.027 ' 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:53.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:53.027 --rc genhtml_branch_coverage=1 00:43:53.027 --rc genhtml_function_coverage=1 00:43:53.027 --rc genhtml_legend=1 00:43:53.027 --rc geninfo_all_blocks=1 00:43:53.027 --rc geninfo_unexecuted_blocks=1 00:43:53.027 00:43:53.027 ' 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:53.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:53.027 --rc genhtml_branch_coverage=1 00:43:53.027 --rc genhtml_function_coverage=1 00:43:53.027 --rc genhtml_legend=1 00:43:53.027 --rc geninfo_all_blocks=1 00:43:53.027 --rc geninfo_unexecuted_blocks=1 00:43:53.027 00:43:53.027 ' 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:53.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:53.027 --rc genhtml_branch_coverage=1 00:43:53.027 --rc genhtml_function_coverage=1 00:43:53.027 --rc genhtml_legend=1 00:43:53.027 --rc geninfo_all_blocks=1 00:43:53.027 --rc geninfo_unexecuted_blocks=1 00:43:53.027 00:43:53.027 ' 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:43:53.027 
10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:43:53.027 10:45:46 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:53.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=51814 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 51814 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 51814 ']' 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:53.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:53.027 10:45:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:53.027 [2024-12-13 10:45:46.870895] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
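run_nvmf_tgt above launches the target with a two-core mask and waitforlisten then blocks until pid 51814 is answering on the default RPC socket. A condensed sketch of that launch-then-poll pattern; paths assume this workspace layout, and the rpc_get_methods probe is one simple readiness check, not necessarily what waitforlisten itself uses:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Start the NVMe-oF target on cores 0-1 and remember its pid.
  "$SPDK_DIR/build/bin/nvmf_tgt" -m 0x3 -p 0 &
  nvmf_tgt_pid=$!

  # Poll until the app answers on the default /var/tmp/spdk.sock RPC socket.
  for _ in $(seq 1 100); do
      "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done
  echo "nvmf_tgt ($nvmf_tgt_pid) is ready for RPC"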
00:43:53.027 [2024-12-13 10:45:46.871000] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51814 ] 00:43:53.286 [2024-12-13 10:45:46.983590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:53.286 [2024-12-13 10:45:47.086159] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:53.286 [2024-12-13 10:45:47.086167] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:53.852 10:45:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:53.852 10:45:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:43:53.852 10:45:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:43:53.852 10:45:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:53.852 10:45:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:53.852 10:45:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:43:53.852 10:45:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:43:53.852 10:45:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:43:53.852 10:45:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:53.852 10:45:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:53.853 10:45:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:43:53.853 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:43:53.853 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:43:53.853 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:43:53.853 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:43:53.853 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:43:53.853 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:43:53.853 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:43:53.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:43:53.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:43:53.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:43:53.853 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:53.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:43:53.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:43:53.853 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:53.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:43:53.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:43:53.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:43:53.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:43:53.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:53.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:43:53.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:43:53.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:43:53.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:43:53.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:43:53.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:43:53.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:43:53.853 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:43:53.853 ' 00:43:57.137 [2024-12-13 10:45:50.365021] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:58.071 [2024-12-13 10:45:51.705581] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:44:00.601 [2024-12-13 10:45:54.185481] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:44:02.499 [2024-12-13 10:45:56.344428] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:44:04.397 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:44:04.397 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:44:04.397 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:44:04.397 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:44:04.397 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:44:04.397 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:44:04.397 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:44:04.397 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:04.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:44:04.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:44:04.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:04.397 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:04.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:44:04.397 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:04.397 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:04.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:44:04.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:04.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:04.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:04.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:04.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:44:04.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:44:04.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:04.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:44:04.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:04.398 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:44:04.398 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:44:04.398 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:44:04.398 10:45:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:44:04.398 10:45:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:04.398 10:45:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:04.398 10:45:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:44:04.398 10:45:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:04.398 10:45:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:04.398 10:45:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:44:04.398 10:45:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:44:04.656 10:45:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:44:04.656 10:45:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:44:04.913 10:45:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:44:04.913 10:45:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:04.913 10:45:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:04.913 
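The configuration built by spdkcli_job.py above can also be reproduced one object at a time with rpc.py; a short sketch of the equivalents for the first few objects, mirroring the names in the spdkcli tree (an illustration, not part of the test itself):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # 32 MiB malloc bdev with 512-byte blocks, TCP transport with 8 KiB IO units.
  $RPC bdev_malloc_create -b Malloc3 32 512
  $RPC nvmf_create_transport -t tcp -u 8192

  # Subsystem cnode1 with one namespace and a TCP listener on 127.0.0.1:4260.
  $RPC nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -m 4 -a
  $RPC nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260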
10:45:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:44:04.913 10:45:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:04.913 10:45:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:04.914 10:45:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:44:04.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:44:04.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:04.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:44:04.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:44:04.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:44:04.914 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:44:04.914 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:04.914 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:44:04.914 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:44:04.914 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:44:04.914 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:44:04.914 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:44:04.914 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:44:04.914 ' 00:44:11.466 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:44:11.466 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:44:11.466 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:11.466 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:44:11.466 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:44:11.466 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:44:11.466 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:44:11.466 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:11.466 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:44:11.466 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:44:11.466 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:44:11.466 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:44:11.466 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:44:11.466 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:44:11.466 10:46:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:44:11.466 10:46:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:11.466 10:46:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:11.466 
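The delete pass that follows has equally direct rpc.py counterparts; a brief sketch mirroring the creation snippet earlier, using the objects set up above:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $RPC nvmf_subsystem_remove_ns nqn.2014-08.org.spdk:cnode1 1
  $RPC nvmf_subsystem_remove_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4262
  $RPC nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode3
  $RPC bdev_malloc_delete Malloc3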
10:46:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 51814 00:44:11.466 10:46:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 51814 ']' 00:44:11.466 10:46:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 51814 00:44:11.466 10:46:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:44:11.467 10:46:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:11.467 10:46:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 51814 00:44:11.467 10:46:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:11.467 10:46:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:11.467 10:46:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 51814' 00:44:11.467 killing process with pid 51814 00:44:11.467 10:46:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 51814 00:44:11.467 10:46:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 51814 00:44:11.724 10:46:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:44:11.724 10:46:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:44:11.724 10:46:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 51814 ']' 00:44:11.724 10:46:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 51814 00:44:11.724 10:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 51814 ']' 00:44:11.724 10:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 51814 00:44:11.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (51814) - No such process 00:44:11.724 10:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 51814 is not found' 00:44:11.724 Process with pid 51814 is not found 00:44:11.724 10:46:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:44:11.724 10:46:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:44:11.724 10:46:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:44:11.724 00:44:11.724 real 0m18.822s 00:44:11.724 user 0m39.842s 00:44:11.724 sys 0m0.908s 00:44:11.724 10:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:11.724 10:46:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:11.724 ************************************ 00:44:11.724 END TEST spdkcli_nvmf_tcp 00:44:11.724 ************************************ 00:44:11.724 10:46:05 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:11.724 10:46:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:11.724 10:46:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:11.724 10:46:05 -- common/autotest_common.sh@10 -- # set +x 00:44:11.724 ************************************ 00:44:11.724 START TEST nvmf_identify_passthru 00:44:11.724 ************************************ 00:44:11.724 10:46:05 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:11.724 * Looking for test storage... 
00:44:11.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:11.724 10:46:05 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:11.724 10:46:05 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:44:11.724 10:46:05 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:11.983 10:46:05 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:11.983 10:46:05 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:44:11.983 10:46:05 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:11.983 10:46:05 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:11.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:11.983 --rc genhtml_branch_coverage=1 00:44:11.983 --rc genhtml_function_coverage=1 00:44:11.983 --rc genhtml_legend=1 00:44:11.983 --rc geninfo_all_blocks=1 00:44:11.983 --rc geninfo_unexecuted_blocks=1 00:44:11.983 00:44:11.983 ' 00:44:11.983 10:46:05 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:11.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:11.983 --rc genhtml_branch_coverage=1 00:44:11.983 --rc genhtml_function_coverage=1 00:44:11.983 --rc genhtml_legend=1 00:44:11.983 --rc geninfo_all_blocks=1 00:44:11.983 --rc geninfo_unexecuted_blocks=1 00:44:11.983 00:44:11.983 ' 00:44:11.983 10:46:05 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:11.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:11.983 --rc genhtml_branch_coverage=1 00:44:11.983 --rc genhtml_function_coverage=1 00:44:11.983 --rc genhtml_legend=1 00:44:11.983 --rc geninfo_all_blocks=1 00:44:11.983 --rc geninfo_unexecuted_blocks=1 00:44:11.983 00:44:11.983 ' 00:44:11.983 10:46:05 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:11.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:11.983 --rc genhtml_branch_coverage=1 00:44:11.983 --rc genhtml_function_coverage=1 00:44:11.983 --rc genhtml_legend=1 00:44:11.983 --rc geninfo_all_blocks=1 00:44:11.983 --rc geninfo_unexecuted_blocks=1 00:44:11.983 00:44:11.983 ' 00:44:11.983 10:46:05 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:11.983 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:44:11.983 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:11.983 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:11.983 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:11.983 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:44:11.983 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:11.983 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:11.983 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:11.983 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:11.983 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:11.983 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:11.983 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:11.983 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:11.983 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:11.983 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:11.983 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:11.984 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:11.984 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:11.984 10:46:05 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:44:11.984 10:46:05 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:11.984 10:46:05 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:11.984 10:46:05 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:11.984 10:46:05 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:11.984 10:46:05 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:11.984 10:46:05 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:11.984 10:46:05 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:11.984 10:46:05 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:11.984 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:44:11.984 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:11.984 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:11.984 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:11.984 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:11.984 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:11.984 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:11.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:11.984 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:11.984 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:11.984 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:11.984 10:46:05 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:11.984 10:46:05 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:44:11.984 10:46:05 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:11.984 10:46:05 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:11.984 10:46:05 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:11.984 10:46:05 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:11.984 10:46:05 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:11.984 10:46:05 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:11.984 10:46:05 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:11.984 10:46:05 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:11.984 10:46:05 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:44:11.984 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:11.984 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:11.984 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:11.984 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:11.984 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:11.984 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:11.984 10:46:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:11.984 10:46:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:11.984 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:11.984 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:11.984 10:46:05 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:44:11.984 10:46:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:44:17.246 10:46:10 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:17.246 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:44:17.247 Found 0000:af:00.0 (0x8086 - 0x159b) 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:44:17.247 Found 0000:af:00.1 (0x8086 - 0x159b) 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:44:17.247 Found net devices under 0000:af:00.0: cvl_0_0 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:44:17.247 Found net devices under 0000:af:00.1: cvl_0_1 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:17.247 10:46:10 nvmf_identify_passthru -- 
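gather_supported_nvmf_pci_devs above walks a table of supported Intel/Mellanox device IDs, keeps the two E810 functions found on this host, and resolves each to its kernel net device through sysfs. The sysfs lookup itself is small enough to sketch directly (PCI addresses follow the log):

  for pci in 0000:af:00.0 0000:af:00.1; do
      # Each network-capable PCI function exposes its interfaces under .../net/.
      for netdev in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$netdev" ] || continue
          echo "Found net devices under $pci: $(basename "$netdev")"
      done
  done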
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:17.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:17.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:44:17.247 00:44:17.247 --- 10.0.0.2 ping statistics --- 00:44:17.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:17.247 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:17.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
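nvmf_tcp_init above carves the two E810 ports into a back-to-back test network: the target-side port moves into its own namespace, both ends get 10.0.0.x/24 addresses, the NVMe/TCP port is opened in iptables, and a ping in each direction verifies reachability. Condensed, the bring-up is (interface and namespace names follow the log):

  TARGET_NS=cvl_0_0_ns_spdk

  ip netns add "$TARGET_NS"
  ip link set cvl_0_0 netns "$TARGET_NS"                        # target side into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the host
  ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
  ip netns exec "$TARGET_NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                            # host -> namespace
  ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                 # namespace -> host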
00:44:17.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:44:17.247 00:44:17.247 --- 10.0.0.1 ping statistics --- 00:44:17.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:17.247 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:17.247 10:46:10 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:17.247 10:46:10 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:44:17.247 10:46:10 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:17.247 10:46:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:17.247 10:46:10 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:44:17.247 10:46:10 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:44:17.247 10:46:10 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:44:17.247 10:46:10 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:44:17.247 10:46:10 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:44:17.247 10:46:10 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:44:17.247 10:46:10 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:44:17.247 10:46:10 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:17.247 10:46:10 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:44:17.247 10:46:10 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:44:17.247 10:46:10 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:44:17.247 10:46:10 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:44:17.247 10:46:10 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:44:17.247 10:46:10 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:44:17.247 10:46:10 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:44:17.247 10:46:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:44:17.247 10:46:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:44:17.247 10:46:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:44:21.429 10:46:15 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ7244049A1P0FGN 00:44:21.429 10:46:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:44:21.429 10:46:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:44:21.429 10:46:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:44:25.611 10:46:19 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:44:25.611 10:46:19 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:44:25.611 10:46:19 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:25.611 10:46:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:25.611 10:46:19 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:44:25.612 10:46:19 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:25.612 10:46:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:25.612 10:46:19 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=59046 00:44:25.612 10:46:19 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:25.612 10:46:19 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 59046 00:44:25.612 10:46:19 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 59046 ']' 00:44:25.612 10:46:19 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:25.612 10:46:19 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:44:25.612 10:46:19 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:25.612 10:46:19 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:25.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:25.612 10:46:19 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:25.612 10:46:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:25.870 [2024-12-13 10:46:19.532182] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:44:25.870 [2024-12-13 10:46:19.532287] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:25.870 [2024-12-13 10:46:19.650534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:25.870 [2024-12-13 10:46:19.757986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:25.870 [2024-12-13 10:46:19.758032] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
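Before the passthru target comes up, the test records the local controller's identity (captured above as nvme_serial_number and nvme_model_number) so it can later confirm that the NVMe-oF subsystem passes the same values through. A compact form of that extraction, using the same identify binary and bdf as the trace:

  IDENTIFY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
  bdf=0000:5e:00.0

  # Third whitespace-separated field of the matching identify output lines.
  serial=$("$IDENTIFY" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
  model=$("$IDENTIFY" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
  echo "local controller: serial=$serial model=$model"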
00:44:25.870 [2024-12-13 10:46:19.758043] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:25.870 [2024-12-13 10:46:19.758053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:25.870 [2024-12-13 10:46:19.758062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:25.870 [2024-12-13 10:46:19.760351] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:25.870 [2024-12-13 10:46:19.760369] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:44:25.870 [2024-12-13 10:46:19.760388] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:25.870 [2024-12-13 10:46:19.760397] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:44:26.801 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:26.801 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:44:26.801 10:46:20 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:44:26.801 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.802 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:26.802 INFO: Log level set to 20 00:44:26.802 INFO: Requests: 00:44:26.802 { 00:44:26.802 "jsonrpc": "2.0", 00:44:26.802 "method": "nvmf_set_config", 00:44:26.802 "id": 1, 00:44:26.802 "params": { 00:44:26.802 "admin_cmd_passthru": { 00:44:26.802 "identify_ctrlr": true 00:44:26.802 } 00:44:26.802 } 00:44:26.802 } 00:44:26.802 00:44:26.802 INFO: response: 00:44:26.802 { 00:44:26.802 "jsonrpc": "2.0", 00:44:26.802 "id": 1, 00:44:26.802 "result": true 00:44:26.802 } 00:44:26.802 00:44:26.802 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.802 10:46:20 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:44:26.802 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.802 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:26.802 INFO: Setting log level to 20 00:44:26.802 INFO: Setting log level to 20 00:44:26.802 INFO: Log level set to 20 00:44:26.802 INFO: Log level set to 20 00:44:26.802 INFO: Requests: 00:44:26.802 { 00:44:26.802 "jsonrpc": "2.0", 00:44:26.802 "method": "framework_start_init", 00:44:26.802 "id": 1 00:44:26.802 } 00:44:26.802 00:44:26.802 INFO: Requests: 00:44:26.802 { 00:44:26.802 "jsonrpc": "2.0", 00:44:26.802 "method": "framework_start_init", 00:44:26.802 "id": 1 00:44:26.802 } 00:44:26.802 00:44:26.802 [2024-12-13 10:46:20.690044] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:44:27.059 INFO: response: 00:44:27.059 { 00:44:27.059 "jsonrpc": "2.0", 00:44:27.059 "id": 1, 00:44:27.059 "result": true 00:44:27.059 } 00:44:27.059 00:44:27.059 INFO: response: 00:44:27.059 { 00:44:27.059 "jsonrpc": "2.0", 00:44:27.059 "id": 1, 00:44:27.059 "result": true 00:44:27.059 } 00:44:27.059 00:44:27.059 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:27.059 10:46:20 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:27.059 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:27.059 10:46:20 nvmf_identify_passthru -- 
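Because nvmf_tgt was started with --wait-for-rpc, the test can toggle identify-command passthrough before the framework initializes; the JSON-RPC exchange above corresponds to two rpc.py calls (the RPC socket lives on the filesystem at /var/tmp/spdk.sock, so no netns wrapper is needed):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $RPC nvmf_set_config --passthru-identify-ctrlr   # must happen before init
  $RPC framework_start_init                        # then bring the subsystems up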
common/autotest_common.sh@10 -- # set +x 00:44:27.059 INFO: Setting log level to 40 00:44:27.059 INFO: Setting log level to 40 00:44:27.059 INFO: Setting log level to 40 00:44:27.059 [2024-12-13 10:46:20.706635] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:27.059 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:27.059 10:46:20 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:44:27.059 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:27.059 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:27.059 10:46:20 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:44:27.059 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:27.059 10:46:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:30.334 Nvme0n1 00:44:30.334 10:46:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:30.335 10:46:23 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:44:30.335 10:46:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:30.335 10:46:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:30.335 10:46:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:30.335 10:46:23 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:44:30.335 10:46:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:30.335 10:46:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:30.335 10:46:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:30.335 10:46:23 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:30.335 10:46:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:30.335 10:46:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:30.335 [2024-12-13 10:46:23.683651] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:30.335 10:46:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:30.335 10:46:23 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:44:30.335 10:46:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:30.335 10:46:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:30.335 [ 00:44:30.335 { 00:44:30.335 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:44:30.335 "subtype": "Discovery", 00:44:30.335 "listen_addresses": [], 00:44:30.335 "allow_any_host": true, 00:44:30.335 "hosts": [] 00:44:30.335 }, 00:44:30.335 { 00:44:30.335 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:44:30.335 "subtype": "NVMe", 00:44:30.335 "listen_addresses": [ 00:44:30.335 { 00:44:30.335 "trtype": "TCP", 00:44:30.335 "adrfam": "IPv4", 00:44:30.335 "traddr": "10.0.0.2", 00:44:30.335 "trsvcid": "4420" 00:44:30.335 } 00:44:30.335 ], 00:44:30.335 "allow_any_host": true, 00:44:30.335 "hosts": [], 00:44:30.335 "serial_number": 
"SPDK00000000000001", 00:44:30.335 "model_number": "SPDK bdev Controller", 00:44:30.335 "max_namespaces": 1, 00:44:30.335 "min_cntlid": 1, 00:44:30.335 "max_cntlid": 65519, 00:44:30.335 "namespaces": [ 00:44:30.335 { 00:44:30.335 "nsid": 1, 00:44:30.335 "bdev_name": "Nvme0n1", 00:44:30.335 "name": "Nvme0n1", 00:44:30.335 "nguid": "901833ECD2FB42389D050636B312BA76", 00:44:30.335 "uuid": "901833ec-d2fb-4238-9d05-0636b312ba76" 00:44:30.335 } 00:44:30.335 ] 00:44:30.335 } 00:44:30.335 ] 00:44:30.335 10:46:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:30.335 10:46:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:30.335 10:46:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:44:30.335 10:46:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:44:30.335 10:46:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:44:30.335 10:46:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:30.335 10:46:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:44:30.335 10:46:23 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:44:30.335 10:46:24 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:44:30.335 10:46:24 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:44:30.335 10:46:24 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:44:30.335 10:46:24 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:30.335 10:46:24 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:30.335 10:46:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:30.335 10:46:24 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:30.335 10:46:24 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:44:30.335 10:46:24 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:44:30.335 10:46:24 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:30.335 10:46:24 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:44:30.335 10:46:24 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:30.335 10:46:24 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:44:30.335 10:46:24 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:30.335 10:46:24 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:30.335 rmmod nvme_tcp 00:44:30.335 rmmod nvme_fabrics 00:44:30.335 rmmod nvme_keyring 00:44:30.335 10:46:24 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:30.335 10:46:24 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:44:30.335 10:46:24 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:44:30.335 10:46:24 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 59046 ']' 00:44:30.335 10:46:24 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 59046 00:44:30.335 10:46:24 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 59046 ']' 00:44:30.335 10:46:24 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 59046 00:44:30.592 10:46:24 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:44:30.592 10:46:24 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:30.592 10:46:24 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59046 00:44:30.592 10:46:24 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:30.592 10:46:24 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:30.592 10:46:24 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59046' 00:44:30.592 killing process with pid 59046 00:44:30.592 10:46:24 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 59046 00:44:30.592 10:46:24 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 59046 00:44:33.117 10:46:26 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:33.117 10:46:26 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:33.117 10:46:26 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:33.117 10:46:26 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:44:33.117 10:46:26 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:44:33.117 10:46:26 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:44:33.117 10:46:26 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:33.117 10:46:26 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:33.117 10:46:26 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:33.117 10:46:26 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:33.117 10:46:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:33.117 10:46:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:35.019 10:46:28 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:35.019 00:44:35.019 real 0m23.295s 00:44:35.019 user 0m33.304s 00:44:35.019 sys 0m5.888s 00:44:35.019 10:46:28 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:35.019 10:46:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:35.019 ************************************ 00:44:35.019 END TEST nvmf_identify_passthru 00:44:35.019 ************************************ 00:44:35.019 10:46:28 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:44:35.019 10:46:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:35.019 10:46:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:35.019 10:46:28 -- common/autotest_common.sh@10 -- # set +x 00:44:35.019 ************************************ 00:44:35.019 START TEST nvmf_dif 00:44:35.019 ************************************ 00:44:35.019 10:46:28 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:44:35.278 * Looking for test storage... 
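The block above closes out nvmf_identify_passthru. With --passthru-identify-ctrlr enabled on the target, the same identify data is read back over TCP from nqn.2016-06.io.spdk:cnode1 and compared against the values captured from the PCIe controller; any mismatch fails the test, after which the subsystem is deleted and nvmftestfini unloads the kernel initiator modules. A condensed sketch of that check, reusing the variables from the earlier sketch (rpc_cmd is the test framework's RPC helper; error handling is omitted):

  fabrics='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

  nvmf_serial_number=$("$SPDK_DIR/build/bin/spdk_nvme_identify" -r " $fabrics" \
      | grep 'Serial Number:' | awk '{print $3}')
  nvmf_model_number=$("$SPDK_DIR/build/bin/spdk_nvme_identify" -r " $fabrics" \
      | grep 'Model Number:' | awk '{print $3}')

  # The fabric-attached controller must report the identity of the underlying
  # PCIe device (BTLJ7244049A1P0FGN / INTEL in this run).
  if [ "$nvmf_serial_number" != "$nvme_serial_number" ]; then exit 1; fi
  if [ "$nvmf_model_number" != "$nvme_model_number" ]; then exit 1; fi

  # Teardown as traced above: drop the subsystem, stop the target, unload the
  # kernel NVMe-oF initiator modules.
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid"
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics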
00:44:35.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:35.278 10:46:28 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:35.278 10:46:28 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:44:35.278 10:46:28 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:35.278 10:46:29 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:35.278 10:46:29 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:44:35.278 10:46:29 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:35.278 10:46:29 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:35.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.278 --rc genhtml_branch_coverage=1 00:44:35.278 --rc genhtml_function_coverage=1 00:44:35.278 --rc genhtml_legend=1 00:44:35.278 --rc geninfo_all_blocks=1 00:44:35.278 --rc geninfo_unexecuted_blocks=1 00:44:35.278 00:44:35.278 ' 00:44:35.278 10:46:29 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:35.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.278 --rc genhtml_branch_coverage=1 00:44:35.278 --rc genhtml_function_coverage=1 00:44:35.278 --rc genhtml_legend=1 00:44:35.278 --rc geninfo_all_blocks=1 00:44:35.279 --rc geninfo_unexecuted_blocks=1 00:44:35.279 00:44:35.279 ' 00:44:35.279 10:46:29 nvmf_dif -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:44:35.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.279 --rc genhtml_branch_coverage=1 00:44:35.279 --rc genhtml_function_coverage=1 00:44:35.279 --rc genhtml_legend=1 00:44:35.279 --rc geninfo_all_blocks=1 00:44:35.279 --rc geninfo_unexecuted_blocks=1 00:44:35.279 00:44:35.279 ' 00:44:35.279 10:46:29 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:35.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:35.279 --rc genhtml_branch_coverage=1 00:44:35.279 --rc genhtml_function_coverage=1 00:44:35.279 --rc genhtml_legend=1 00:44:35.279 --rc geninfo_all_blocks=1 00:44:35.279 --rc geninfo_unexecuted_blocks=1 00:44:35.279 00:44:35.279 ' 00:44:35.279 10:46:29 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:35.279 10:46:29 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:44:35.279 10:46:29 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:35.279 10:46:29 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:35.279 10:46:29 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:35.279 10:46:29 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.279 10:46:29 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.279 10:46:29 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.279 10:46:29 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:44:35.279 10:46:29 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:35.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:35.279 10:46:29 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:44:35.279 10:46:29 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:44:35.279 10:46:29 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:44:35.279 10:46:29 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:44:35.279 10:46:29 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:35.279 10:46:29 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:35.279 10:46:29 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:35.279 10:46:29 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:44:35.279 10:46:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:44:40.541 Found 0000:af:00.0 (0x8086 - 0x159b) 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:40.541 
10:46:34 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:44:40.541 Found 0000:af:00.1 (0x8086 - 0x159b) 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:44:40.541 Found net devices under 0000:af:00.0: cvl_0_0 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:40.541 10:46:34 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:44:40.542 Found net devices under 0000:af:00.1: cvl_0_1 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:40.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:40.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:44:40.542 00:44:40.542 --- 10.0.0.2 ping statistics --- 00:44:40.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:40.542 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:40.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:40.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:44:40.542 00:44:40.542 --- 10.0.0.1 ping statistics --- 00:44:40.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:40.542 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:44:40.542 10:46:34 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:43.069 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:44:43.069 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:44:43.069 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:44:43.069 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:44:43.069 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:44:43.069 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:44:43.069 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:44:43.069 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:44:43.069 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:44:43.069 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:44:43.069 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:44:43.069 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:44:43.069 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:44:43.069 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:44:43.069 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:44:43.069 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:44:43.069 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:44:43.069 10:46:36 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:43.069 10:46:36 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:43.069 10:46:36 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:43.069 10:46:36 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:43.069 10:46:36 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:43.069 10:46:36 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:43.069 10:46:36 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:44:43.069 10:46:36 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:44:43.069 10:46:36 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:43.069 10:46:36 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:43.069 10:46:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:43.069 10:46:36 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=64592 00:44:43.069 10:46:36 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 64592 00:44:43.069 10:46:36 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:44:43.069 10:46:36 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 64592 ']' 00:44:43.069 10:46:36 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:43.069 10:46:36 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:43.069 10:46:36 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:44:43.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:43.069 10:46:36 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:43.069 10:46:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:43.069 [2024-12-13 10:46:36.771927] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:44:43.069 [2024-12-13 10:46:36.772013] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:43.069 [2024-12-13 10:46:36.889280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:43.326 [2024-12-13 10:46:36.992217] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:43.326 [2024-12-13 10:46:36.992262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:43.326 [2024-12-13 10:46:36.992271] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:43.326 [2024-12-13 10:46:36.992297] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:43.326 [2024-12-13 10:46:36.992305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:43.326 [2024-12-13 10:46:36.993624] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:43.893 10:46:37 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:43.893 10:46:37 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:44:43.893 10:46:37 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:43.893 10:46:37 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:43.893 10:46:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:43.893 10:46:37 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:43.893 10:46:37 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:44:43.893 10:46:37 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:44:43.893 10:46:37 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:43.893 10:46:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:43.893 [2024-12-13 10:46:37.606999] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:43.893 10:46:37 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:43.893 10:46:37 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:44:43.893 10:46:37 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:43.893 10:46:37 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:43.893 10:46:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:43.893 ************************************ 00:44:43.893 START TEST fio_dif_1_default 00:44:43.893 ************************************ 00:44:43.893 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:44:43.893 10:46:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:44:43.893 10:46:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:44:43.893 10:46:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:44:43.893 10:46:37 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:44:43.893 10:46:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:44:43.893 10:46:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:43.893 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:43.893 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:43.893 bdev_null0 00:44:43.893 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:43.893 10:46:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:43.893 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:43.893 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:43.893 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:43.893 10:46:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:43.893 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:43.893 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:43.893 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:43.893 10:46:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:43.893 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:43.894 [2024-12-13 10:46:37.663298] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:43.894 { 00:44:43.894 "params": { 00:44:43.894 "name": "Nvme$subsystem", 00:44:43.894 "trtype": "$TEST_TRANSPORT", 00:44:43.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:43.894 "adrfam": "ipv4", 00:44:43.894 "trsvcid": "$NVMF_PORT", 00:44:43.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:43.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:43.894 "hdgst": ${hdgst:-false}, 00:44:43.894 "ddgst": ${ddgst:-false} 00:44:43.894 }, 00:44:43.894 "method": "bdev_nvme_attach_controller" 00:44:43.894 } 00:44:43.894 EOF 00:44:43.894 )") 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
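The heredoc template traced above is expanded once per requested subsystem by gen_nvmf_target_json; for this single-subsystem run the resulting entry, printed right below by the jq/printf pair, is a bdev_nvme_attach_controller call pointing at cnode0. Reformatted for readability (values copied from the output that follows; the surrounding document structure the helper wraps this in is not shown in the trace):

  {
    "params": {
      "name": "Nvme0",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode0",
      "hostnqn": "nqn.2016-06.io.spdk:host0",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }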
00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:43.894 "params": { 00:44:43.894 "name": "Nvme0", 00:44:43.894 "trtype": "tcp", 00:44:43.894 "traddr": "10.0.0.2", 00:44:43.894 "adrfam": "ipv4", 00:44:43.894 "trsvcid": "4420", 00:44:43.894 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:43.894 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:43.894 "hdgst": false, 00:44:43.894 "ddgst": false 00:44:43.894 }, 00:44:43.894 "method": "bdev_nvme_attach_controller" 00:44:43.894 }' 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # break 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:43.894 10:46:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:44.152 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:44.152 fio-3.35 00:44:44.152 Starting 1 thread 00:44:56.353 00:44:56.353 filename0: (groupid=0, jobs=1): err= 0: pid=65081: Fri Dec 13 10:46:48 2024 00:44:56.353 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10012msec) 00:44:56.353 slat (nsec): min=6639, max=72268, avg=9018.51, stdev=3922.47 00:44:56.353 clat (usec): min=40794, max=42984, avg=41001.09, stdev=196.49 00:44:56.353 lat (usec): min=40801, max=43022, avg=41010.11, stdev=197.26 00:44:56.353 clat percentiles (usec): 00:44:56.353 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:44:56.353 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:44:56.353 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:44:56.353 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:44:56.353 | 99.99th=[42730] 00:44:56.353 bw ( KiB/s): min= 384, max= 416, per=99.50%, avg=388.80, stdev=11.72, samples=20 00:44:56.353 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:44:56.353 lat (msec) : 50=100.00% 00:44:56.353 cpu : usr=93.35%, sys=6.32%, ctx=13, majf=0, minf=1632 00:44:56.353 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:56.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:56.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:56.353 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:56.353 latency : target=0, window=0, percentile=100.00%, depth=4 00:44:56.353 00:44:56.353 Run status group 0 (all jobs): 00:44:56.353 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10012-10012msec 00:44:56.353 ----------------------------------------------------- 00:44:56.353 Suppressions used: 00:44:56.353 count bytes template 00:44:56.353 1 8 /usr/src/fio/parse.c 00:44:56.353 1 8 libtcmalloc_minimal.so 00:44:56.353 1 904 libcrypto.so 00:44:56.353 ----------------------------------------------------- 00:44:56.353 00:44:56.353 10:46:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 
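That completes the fio_dif_1_default pass. Stripped of the test-framework plumbing, the invocation traced above follows roughly this pattern; paths are the ones from the log, and process substitution stands in for the /dev/fd/62 and /dev/fd/61 descriptors the framework wires up:

  # The ASan runtime and the SPDK fio plugin are preloaded; fio reads both the
  # bdev JSON config and the job file from anonymous file descriptors.
  LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' \
      /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf <(gen_nvmf_target_json 0) \
      <(gen_fio_conf)

The summary numbers are self-consistent: with iodepth 4 and an average completion latency of roughly 41 ms, the job sustains about 4 / 0.041 ≈ 97 IOPS, which at a 4 KiB block size is the ~390 KiB/s (399 kB/s) reported above.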
00:44:56.353 10:46:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:44:56.353 10:46:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:44:56.353 10:46:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:56.353 10:46:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:44:56.353 10:46:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:56.353 10:46:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.353 10:46:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:56.353 10:46:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.353 10:46:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:56.353 10:46:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.353 10:46:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:56.353 10:46:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.353 00:44:56.353 real 0m12.511s 00:44:56.353 user 0m17.028s 00:44:56.353 sys 0m1.210s 00:44:56.353 10:46:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:56.354 10:46:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:56.354 ************************************ 00:44:56.354 END TEST fio_dif_1_default 00:44:56.354 ************************************ 00:44:56.354 10:46:50 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:44:56.354 10:46:50 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:56.354 10:46:50 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:56.354 10:46:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:56.354 ************************************ 00:44:56.354 START TEST fio_dif_1_multi_subsystems 00:44:56.354 ************************************ 00:44:56.354 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:44:56.354 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:44:56.354 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:44:56.354 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:44:56.354 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:44:56.354 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:44:56.354 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:44:56.354 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:56.354 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.354 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:56.354 bdev_null0 00:44:56.354 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.354 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:56.354 
10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.354 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:56.354 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.354 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:56.354 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.354 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:56.354 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.354 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:56.354 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.354 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:56.613 [2024-12-13 10:46:50.246693] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:56.613 bdev_null1 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:56.613 { 00:44:56.613 "params": { 00:44:56.613 "name": "Nvme$subsystem", 00:44:56.613 "trtype": "$TEST_TRANSPORT", 00:44:56.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:56.613 "adrfam": "ipv4", 00:44:56.613 "trsvcid": "$NVMF_PORT", 00:44:56.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:56.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:56.613 "hdgst": ${hdgst:-false}, 00:44:56.613 "ddgst": ${ddgst:-false} 00:44:56.613 }, 00:44:56.613 "method": "bdev_nvme_attach_controller" 00:44:56.613 } 00:44:56.613 EOF 00:44:56.613 )") 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:56.613 10:46:50 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:56.613 { 00:44:56.613 "params": { 00:44:56.613 "name": "Nvme$subsystem", 00:44:56.613 "trtype": "$TEST_TRANSPORT", 00:44:56.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:56.613 "adrfam": "ipv4", 00:44:56.613 "trsvcid": "$NVMF_PORT", 00:44:56.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:56.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:56.613 "hdgst": ${hdgst:-false}, 00:44:56.613 "ddgst": ${ddgst:-false} 00:44:56.613 }, 00:44:56.613 "method": "bdev_nvme_attach_controller" 00:44:56.613 } 00:44:56.613 EOF 00:44:56.613 )") 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:44:56.613 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:56.613 "params": { 00:44:56.613 "name": "Nvme0", 00:44:56.614 "trtype": "tcp", 00:44:56.614 "traddr": "10.0.0.2", 00:44:56.614 "adrfam": "ipv4", 00:44:56.614 "trsvcid": "4420", 00:44:56.614 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:56.614 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:56.614 "hdgst": false, 00:44:56.614 "ddgst": false 00:44:56.614 }, 00:44:56.614 "method": "bdev_nvme_attach_controller" 00:44:56.614 },{ 00:44:56.614 "params": { 00:44:56.614 "name": "Nvme1", 00:44:56.614 "trtype": "tcp", 00:44:56.614 "traddr": "10.0.0.2", 00:44:56.614 "adrfam": "ipv4", 00:44:56.614 "trsvcid": "4420", 00:44:56.614 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:56.614 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:56.614 "hdgst": false, 00:44:56.614 "ddgst": false 00:44:56.614 }, 00:44:56.614 "method": "bdev_nvme_attach_controller" 00:44:56.614 }' 00:44:56.614 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:56.614 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:56.614 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # break 00:44:56.614 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:56.614 10:46:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:56.872 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:56.872 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:56.872 fio-3.35 00:44:56.872 Starting 2 threads 00:45:09.151 00:45:09.151 
filename0: (groupid=0, jobs=1): err= 0: pid=67206: Fri Dec 13 10:47:01 2024 00:45:09.151 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10015msec) 00:45:09.151 slat (nsec): min=6958, max=37722, avg=9175.17, stdev=2930.17 00:45:09.151 clat (usec): min=40725, max=44608, avg=41015.19, stdev=287.72 00:45:09.151 lat (usec): min=40732, max=44646, avg=41024.37, stdev=288.42 00:45:09.151 clat percentiles (usec): 00:45:09.152 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:45:09.152 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:45:09.152 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:45:09.152 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:45:09.152 | 99.99th=[44827] 00:45:09.152 bw ( KiB/s): min= 384, max= 416, per=33.97%, avg=388.80, stdev=11.72, samples=20 00:45:09.152 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:45:09.152 lat (msec) : 50=100.00% 00:45:09.152 cpu : usr=96.79%, sys=2.93%, ctx=13, majf=0, minf=1635 00:45:09.152 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:09.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:09.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:09.152 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:09.152 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:09.152 filename1: (groupid=0, jobs=1): err= 0: pid=67207: Fri Dec 13 10:47:01 2024 00:45:09.152 read: IOPS=188, BW=753KiB/s (771kB/s)(7536KiB/10009msec) 00:45:09.152 slat (nsec): min=6891, max=36734, avg=8328.47, stdev=2060.83 00:45:09.152 clat (usec): min=482, max=46407, avg=21225.09, stdev=20551.63 00:45:09.152 lat (usec): min=489, max=46444, avg=21233.42, stdev=20551.14 00:45:09.152 clat percentiles (usec): 00:45:09.152 | 1.00th=[ 490], 5.00th=[ 498], 10.00th=[ 506], 20.00th=[ 523], 00:45:09.152 | 30.00th=[ 537], 40.00th=[ 635], 50.00th=[41157], 60.00th=[41157], 00:45:09.152 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:45:09.152 | 99.00th=[42730], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:45:09.152 | 99.99th=[46400] 00:45:09.152 bw ( KiB/s): min= 704, max= 768, per=65.83%, avg=752.00, stdev=28.43, samples=20 00:45:09.152 iops : min= 176, max= 192, avg=188.00, stdev= 7.11, samples=20 00:45:09.152 lat (usec) : 500=6.74%, 750=42.30%, 1000=0.64% 00:45:09.152 lat (msec) : 50=50.32% 00:45:09.152 cpu : usr=97.04%, sys=2.67%, ctx=15, majf=0, minf=1633 00:45:09.152 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:09.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:09.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:09.152 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:09.152 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:09.152 00:45:09.152 Run status group 0 (all jobs): 00:45:09.152 READ: bw=1142KiB/s (1170kB/s), 390KiB/s-753KiB/s (399kB/s-771kB/s), io=11.2MiB (11.7MB), run=10009-10015msec 00:45:09.152 ----------------------------------------------------- 00:45:09.152 Suppressions used: 00:45:09.152 count bytes template 00:45:09.152 2 16 /usr/src/fio/parse.c 00:45:09.152 1 8 libtcmalloc_minimal.so 00:45:09.152 1 904 libcrypto.so 00:45:09.152 ----------------------------------------------------- 00:45:09.152 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
destroy_subsystems 0 1 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.152 00:45:09.152 real 0m12.664s 00:45:09.152 user 0m28.091s 00:45:09.152 sys 0m1.071s 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:09.152 10:47:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:09.152 ************************************ 00:45:09.152 END TEST fio_dif_1_multi_subsystems 00:45:09.152 ************************************ 00:45:09.152 10:47:02 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:45:09.152 10:47:02 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:09.152 10:47:02 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:09.152 10:47:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:09.152 ************************************ 00:45:09.152 START TEST fio_dif_rand_params 00:45:09.152 ************************************ 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:09.152 bdev_null0 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:09.152 [2024-12-13 10:47:02.983830] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:09.152 { 00:45:09.152 "params": { 00:45:09.152 "name": "Nvme$subsystem", 00:45:09.152 "trtype": "$TEST_TRANSPORT", 00:45:09.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:09.152 "adrfam": "ipv4", 00:45:09.152 "trsvcid": "$NVMF_PORT", 00:45:09.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:09.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:09.152 "hdgst": ${hdgst:-false}, 00:45:09.152 "ddgst": ${ddgst:-false} 00:45:09.152 }, 00:45:09.152 "method": "bdev_nvme_attach_controller" 00:45:09.152 } 00:45:09.152 EOF 00:45:09.152 )") 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:09.152 10:47:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:09.152 "params": { 00:45:09.152 "name": "Nvme0", 00:45:09.152 "trtype": "tcp", 00:45:09.152 "traddr": "10.0.0.2", 00:45:09.152 "adrfam": "ipv4", 00:45:09.152 "trsvcid": "4420", 00:45:09.152 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:09.152 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:09.152 "hdgst": false, 00:45:09.152 "ddgst": false 00:45:09.152 }, 00:45:09.152 "method": "bdev_nvme_attach_controller" 00:45:09.152 }' 00:45:09.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:09.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:09.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:45:09.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:09.152 10:47:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:09.744 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:09.744 ... 00:45:09.744 fio-3.35 00:45:09.744 Starting 3 threads 00:45:16.301 00:45:16.301 filename0: (groupid=0, jobs=1): err= 0: pid=69157: Fri Dec 13 10:47:09 2024 00:45:16.301 read: IOPS=283, BW=35.5MiB/s (37.2MB/s)(178MiB/5006msec) 00:45:16.301 slat (nsec): min=7504, max=43337, avg=20011.30, stdev=5303.35 00:45:16.301 clat (usec): min=4261, max=51064, avg=10543.69, stdev=3417.18 00:45:16.301 lat (usec): min=4273, max=51081, avg=10563.70, stdev=3417.28 00:45:16.301 clat percentiles (usec): 00:45:16.301 | 1.00th=[ 6456], 5.00th=[ 7439], 10.00th=[ 8717], 20.00th=[ 9503], 00:45:16.301 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10683], 00:45:16.301 | 70.00th=[10945], 80.00th=[11338], 90.00th=[11994], 95.00th=[12518], 00:45:16.301 | 99.00th=[13698], 99.50th=[47973], 99.90th=[51119], 99.95th=[51119], 00:45:16.301 | 99.99th=[51119] 00:45:16.301 bw ( KiB/s): min=28928, max=41472, per=36.68%, avg=36326.40, stdev=3292.69, samples=10 00:45:16.301 iops : min= 226, max= 324, avg=283.80, stdev=25.72, samples=10 00:45:16.301 lat (msec) : 10=37.44%, 20=61.93%, 50=0.21%, 100=0.42% 00:45:16.301 cpu : usr=96.78%, sys=2.84%, ctx=11, majf=0, minf=1634 00:45:16.301 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:16.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:16.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:16.301 issued rwts: total=1421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:16.301 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:16.301 filename0: (groupid=0, jobs=1): err= 0: pid=69158: Fri Dec 13 10:47:09 2024 00:45:16.301 read: IOPS=231, BW=29.0MiB/s (30.4MB/s)(146MiB/5045msec) 00:45:16.301 slat (nsec): min=7568, max=42106, avg=18921.18, stdev=5544.52 00:45:16.301 clat (usec): min=4480, max=52992, avg=12886.56, stdev=4167.20 00:45:16.301 lat (usec): min=4494, max=53008, avg=12905.49, stdev=4167.45 00:45:16.301 clat percentiles (usec): 00:45:16.301 | 1.00th=[ 6521], 5.00th=[ 7701], 10.00th=[ 9634], 20.00th=[10945], 00:45:16.301 | 
30.00th=[11863], 40.00th=[12518], 50.00th=[13042], 60.00th=[13435], 00:45:16.301 | 70.00th=[13698], 80.00th=[14091], 90.00th=[15008], 95.00th=[15926], 00:45:16.301 | 99.00th=[22414], 99.50th=[48497], 99.90th=[50594], 99.95th=[53216], 00:45:16.301 | 99.99th=[53216] 00:45:16.301 bw ( KiB/s): min=27648, max=34048, per=30.17%, avg=29875.20, stdev=2008.69, samples=10 00:45:16.301 iops : min= 216, max= 266, avg=233.40, stdev=15.69, samples=10 00:45:16.301 lat (msec) : 10=11.04%, 20=87.77%, 50=0.94%, 100=0.26% 00:45:16.301 cpu : usr=95.94%, sys=3.17%, ctx=317, majf=0, minf=2639 00:45:16.301 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:16.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:16.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:16.301 issued rwts: total=1169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:16.301 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:16.301 filename0: (groupid=0, jobs=1): err= 0: pid=69159: Fri Dec 13 10:47:09 2024 00:45:16.301 read: IOPS=262, BW=32.8MiB/s (34.4MB/s)(164MiB/5004msec) 00:45:16.301 slat (nsec): min=7314, max=47904, avg=27243.72, stdev=7278.86 00:45:16.301 clat (usec): min=3473, max=55157, avg=11400.17, stdev=6182.02 00:45:16.301 lat (usec): min=3505, max=55191, avg=11427.41, stdev=6181.52 00:45:16.301 clat percentiles (usec): 00:45:16.301 | 1.00th=[ 7898], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9503], 00:45:16.301 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10683], 00:45:16.301 | 70.00th=[11076], 80.00th=[11600], 90.00th=[12387], 95.00th=[13173], 00:45:16.301 | 99.00th=[51119], 99.50th=[51643], 99.90th=[54789], 99.95th=[55313], 00:45:16.301 | 99.99th=[55313] 00:45:16.301 bw ( KiB/s): min=23040, max=38400, per=33.38%, avg=33052.44, stdev=4710.76, samples=9 00:45:16.301 iops : min= 180, max= 300, avg=258.22, stdev=36.80, samples=9 00:45:16.301 lat (msec) : 4=0.08%, 10=36.86%, 20=60.78%, 50=0.38%, 100=1.90% 00:45:16.301 cpu : usr=96.66%, sys=2.96%, ctx=8, majf=0, minf=1634 00:45:16.301 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:16.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:16.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:16.301 issued rwts: total=1313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:16.301 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:16.301 00:45:16.301 Run status group 0 (all jobs): 00:45:16.301 READ: bw=96.7MiB/s (101MB/s), 29.0MiB/s-35.5MiB/s (30.4MB/s-37.2MB/s), io=488MiB (512MB), run=5004-5045msec 00:45:16.868 ----------------------------------------------------- 00:45:16.868 Suppressions used: 00:45:16.868 count bytes template 00:45:16.868 5 44 /usr/src/fio/parse.c 00:45:16.868 1 8 libtcmalloc_minimal.so 00:45:16.868 1 904 libcrypto.so 00:45:16.868 ----------------------------------------------------- 00:45:16.868 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:16.868 bdev_null0 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:16.868 [2024-12-13 10:47:10.535352] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:16.868 bdev_null1 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:16.868 bdev_null2 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.868 10:47:10 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:16.868 10:47:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:16.868 { 00:45:16.868 "params": { 00:45:16.868 "name": "Nvme$subsystem", 00:45:16.868 "trtype": "$TEST_TRANSPORT", 00:45:16.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:16.868 "adrfam": "ipv4", 00:45:16.868 "trsvcid": "$NVMF_PORT", 00:45:16.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:16.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:16.868 "hdgst": ${hdgst:-false}, 00:45:16.868 "ddgst": ${ddgst:-false} 00:45:16.868 }, 00:45:16.868 "method": "bdev_nvme_attach_controller" 00:45:16.869 } 00:45:16.869 EOF 00:45:16.869 )") 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 
00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:16.869 { 00:45:16.869 "params": { 00:45:16.869 "name": "Nvme$subsystem", 00:45:16.869 "trtype": "$TEST_TRANSPORT", 00:45:16.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:16.869 "adrfam": "ipv4", 00:45:16.869 "trsvcid": "$NVMF_PORT", 00:45:16.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:16.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:16.869 "hdgst": ${hdgst:-false}, 00:45:16.869 "ddgst": ${ddgst:-false} 00:45:16.869 }, 00:45:16.869 "method": "bdev_nvme_attach_controller" 00:45:16.869 } 00:45:16.869 EOF 00:45:16.869 )") 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:16.869 { 00:45:16.869 "params": { 00:45:16.869 "name": "Nvme$subsystem", 00:45:16.869 "trtype": "$TEST_TRANSPORT", 00:45:16.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:16.869 "adrfam": "ipv4", 00:45:16.869 "trsvcid": "$NVMF_PORT", 00:45:16.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:16.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:16.869 "hdgst": ${hdgst:-false}, 00:45:16.869 "ddgst": ${ddgst:-false} 00:45:16.869 }, 00:45:16.869 "method": "bdev_nvme_attach_controller" 00:45:16.869 } 00:45:16.869 EOF 00:45:16.869 )") 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:16.869 "params": { 00:45:16.869 "name": "Nvme0", 00:45:16.869 "trtype": "tcp", 00:45:16.869 "traddr": "10.0.0.2", 00:45:16.869 "adrfam": "ipv4", 00:45:16.869 "trsvcid": "4420", 00:45:16.869 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:16.869 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:16.869 "hdgst": false, 00:45:16.869 "ddgst": false 00:45:16.869 }, 00:45:16.869 "method": "bdev_nvme_attach_controller" 00:45:16.869 },{ 00:45:16.869 "params": { 00:45:16.869 "name": "Nvme1", 00:45:16.869 "trtype": "tcp", 00:45:16.869 "traddr": "10.0.0.2", 00:45:16.869 "adrfam": "ipv4", 00:45:16.869 "trsvcid": "4420", 00:45:16.869 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:16.869 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:16.869 "hdgst": false, 00:45:16.869 "ddgst": false 00:45:16.869 }, 00:45:16.869 "method": "bdev_nvme_attach_controller" 00:45:16.869 },{ 00:45:16.869 "params": { 00:45:16.869 "name": "Nvme2", 00:45:16.869 "trtype": "tcp", 00:45:16.869 "traddr": "10.0.0.2", 00:45:16.869 "adrfam": "ipv4", 00:45:16.869 "trsvcid": "4420", 00:45:16.869 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:45:16.869 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:45:16.869 "hdgst": false, 00:45:16.869 "ddgst": false 00:45:16.869 }, 00:45:16.869 "method": "bdev_nvme_attach_controller" 00:45:16.869 }' 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:16.869 10:47:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:17.127 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:17.127 ... 00:45:17.127 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:17.127 ... 00:45:17.127 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:17.127 ... 
00:45:17.127 fio-3.35 00:45:17.127 Starting 24 threads 00:45:29.323 00:45:29.323 filename0: (groupid=0, jobs=1): err= 0: pid=70576: Fri Dec 13 10:47:22 2024 00:45:29.323 read: IOPS=441, BW=1764KiB/s (1807kB/s)(17.2MiB/10012msec) 00:45:29.323 slat (nsec): min=7449, max=79712, avg=24714.54, stdev=8718.37 00:45:29.323 clat (usec): min=13953, max=55534, avg=36070.77, stdev=1779.51 00:45:29.323 lat (usec): min=13970, max=55550, avg=36095.49, stdev=1779.10 00:45:29.323 clat percentiles (usec): 00:45:29.323 | 1.00th=[35390], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:45:29.323 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:29.323 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:45:29.323 | 99.00th=[42206], 99.50th=[49021], 99.90th=[50594], 99.95th=[50594], 00:45:29.323 | 99.99th=[55313] 00:45:29.323 bw ( KiB/s): min= 1664, max= 1792, per=4.18%, avg=1765.05, stdev=53.61, samples=19 00:45:29.323 iops : min= 416, max= 448, avg=441.26, stdev=13.40, samples=19 00:45:29.323 lat (msec) : 20=0.36%, 50=99.28%, 100=0.36% 00:45:29.323 cpu : usr=98.49%, sys=1.03%, ctx=18, majf=0, minf=1636 00:45:29.323 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:45:29.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.323 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.323 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.323 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:29.323 filename0: (groupid=0, jobs=1): err= 0: pid=70577: Fri Dec 13 10:47:22 2024 00:45:29.323 read: IOPS=439, BW=1758KiB/s (1800kB/s)(17.2MiB/10013msec) 00:45:29.323 slat (nsec): min=9329, max=73663, avg=26265.21, stdev=9219.21 00:45:29.323 clat (usec): min=27954, max=87231, avg=36157.96, stdev=3150.59 00:45:29.323 lat (usec): min=27971, max=87279, avg=36184.23, stdev=3150.83 00:45:29.323 clat percentiles (usec): 00:45:29.323 | 1.00th=[35390], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:45:29.323 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:29.323 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:45:29.323 | 99.00th=[37487], 99.50th=[43254], 99.90th=[87557], 99.95th=[87557], 00:45:29.323 | 99.99th=[87557] 00:45:29.323 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1751.58, stdev=74.55, samples=19 00:45:29.323 iops : min= 384, max= 448, avg=437.89, stdev=18.64, samples=19 00:45:29.323 lat (msec) : 50=99.64%, 100=0.36% 00:45:29.323 cpu : usr=98.43%, sys=1.10%, ctx=18, majf=0, minf=1634 00:45:29.323 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:29.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.323 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.323 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.323 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:29.323 filename0: (groupid=0, jobs=1): err= 0: pid=70578: Fri Dec 13 10:47:22 2024 00:45:29.323 read: IOPS=438, BW=1753KiB/s (1795kB/s)(17.1MiB/10004msec) 00:45:29.323 slat (nsec): min=4242, max=96845, avg=22789.28, stdev=8829.50 00:45:29.323 clat (msec): min=25, max=108, avg=36.30, stdev= 4.40 00:45:29.323 lat (msec): min=25, max=108, avg=36.33, stdev= 4.40 00:45:29.323 clat percentiles (msec): 00:45:29.323 | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:45:29.323 | 30.00th=[ 36], 40.00th=[ 36], 
50.00th=[ 36], 60.00th=[ 36], 00:45:29.323 | 70.00th=[ 36], 80.00th=[ 36], 90.00th=[ 37], 95.00th=[ 37], 00:45:29.323 | 99.00th=[ 41], 99.50th=[ 41], 99.90th=[ 109], 99.95th=[ 109], 00:45:29.323 | 99.99th=[ 109] 00:45:29.323 bw ( KiB/s): min= 1408, max= 1792, per=4.14%, avg=1751.58, stdev=95.91, samples=19 00:45:29.323 iops : min= 352, max= 448, avg=437.89, stdev=23.98, samples=19 00:45:29.323 lat (msec) : 50=99.64%, 250=0.36% 00:45:29.323 cpu : usr=98.56%, sys=1.03%, ctx=18, majf=0, minf=1634 00:45:29.323 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:29.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.323 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.323 issued rwts: total=4384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.323 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:29.323 filename0: (groupid=0, jobs=1): err= 0: pid=70579: Fri Dec 13 10:47:22 2024 00:45:29.323 read: IOPS=441, BW=1766KiB/s (1808kB/s)(17.2MiB/10004msec) 00:45:29.323 slat (usec): min=5, max=195, avg=21.40, stdev= 9.47 00:45:29.323 clat (usec): min=18660, max=50698, avg=36077.86, stdev=1451.59 00:45:29.323 lat (usec): min=18680, max=50726, avg=36099.25, stdev=1451.16 00:45:29.323 clat percentiles (usec): 00:45:29.323 | 1.00th=[35390], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:45:29.323 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:29.323 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36439], 00:45:29.323 | 99.00th=[37487], 99.50th=[43779], 99.90th=[50594], 99.95th=[50594], 00:45:29.323 | 99.99th=[50594] 00:45:29.323 bw ( KiB/s): min= 1664, max= 1792, per=4.18%, avg=1765.05, stdev=53.61, samples=19 00:45:29.323 iops : min= 416, max= 448, avg=441.26, stdev=13.40, samples=19 00:45:29.323 lat (msec) : 20=0.36%, 50=99.28%, 100=0.36% 00:45:29.323 cpu : usr=98.33%, sys=1.22%, ctx=16, majf=0, minf=1634 00:45:29.323 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:29.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.323 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.323 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.323 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:29.323 filename0: (groupid=0, jobs=1): err= 0: pid=70580: Fri Dec 13 10:47:22 2024 00:45:29.323 read: IOPS=439, BW=1757KiB/s (1800kB/s)(17.2MiB/10015msec) 00:45:29.323 slat (nsec): min=4102, max=83265, avg=26560.78, stdev=7740.97 00:45:29.323 clat (usec): min=27963, max=88991, avg=36174.76, stdev=3262.62 00:45:29.323 lat (usec): min=27983, max=89006, avg=36201.32, stdev=3261.49 00:45:29.323 clat percentiles (usec): 00:45:29.323 | 1.00th=[35390], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:45:29.323 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:29.323 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:45:29.323 | 99.00th=[37487], 99.50th=[43254], 99.90th=[88605], 99.95th=[88605], 00:45:29.323 | 99.99th=[88605] 00:45:29.323 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1751.58, stdev=74.55, samples=19 00:45:29.323 iops : min= 384, max= 448, avg=437.89, stdev=18.64, samples=19 00:45:29.323 lat (msec) : 50=99.64%, 100=0.36% 00:45:29.323 cpu : usr=98.60%, sys=0.94%, ctx=21, majf=0, minf=1633 00:45:29.323 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:29.323 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.323 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.323 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.323 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:29.323 filename0: (groupid=0, jobs=1): err= 0: pid=70581: Fri Dec 13 10:47:22 2024 00:45:29.323 read: IOPS=440, BW=1761KiB/s (1803kB/s)(17.2MiB/10032msec) 00:45:29.323 slat (nsec): min=4653, max=57764, avg=24853.61, stdev=7311.30 00:45:29.323 clat (usec): min=14060, max=83853, avg=36139.93, stdev=2925.75 00:45:29.323 lat (usec): min=14072, max=83870, avg=36164.78, stdev=2924.85 00:45:29.323 clat percentiles (usec): 00:45:29.323 | 1.00th=[35390], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:45:29.323 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:29.323 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:45:29.323 | 99.00th=[37487], 99.50th=[43779], 99.90th=[79168], 99.95th=[79168], 00:45:29.323 | 99.99th=[83362] 00:45:29.323 bw ( KiB/s): min= 1664, max= 1792, per=4.16%, avg=1758.32, stdev=57.91, samples=19 00:45:29.323 iops : min= 416, max= 448, avg=439.58, stdev=14.48, samples=19 00:45:29.323 lat (msec) : 20=0.36%, 50=99.28%, 100=0.36% 00:45:29.323 cpu : usr=98.30%, sys=1.25%, ctx=16, majf=0, minf=1637 00:45:29.324 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:29.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.324 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.324 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.324 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:29.324 filename0: (groupid=0, jobs=1): err= 0: pid=70582: Fri Dec 13 10:47:22 2024 00:45:29.324 read: IOPS=438, BW=1756KiB/s (1798kB/s)(17.2MiB/10024msec) 00:45:29.324 slat (nsec): min=4791, max=60883, avg=26833.96, stdev=7670.76 00:45:29.324 clat (usec): min=27919, max=97394, avg=36211.89, stdev=3757.83 00:45:29.324 lat (usec): min=27944, max=97411, avg=36238.72, stdev=3756.71 00:45:29.324 clat percentiles (usec): 00:45:29.324 | 1.00th=[35390], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:45:29.324 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:29.324 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:45:29.324 | 99.00th=[37487], 99.50th=[43254], 99.90th=[96994], 99.95th=[96994], 00:45:29.324 | 99.99th=[96994] 00:45:29.324 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1751.58, stdev=74.55, samples=19 00:45:29.324 iops : min= 384, max= 448, avg=437.89, stdev=18.64, samples=19 00:45:29.324 lat (msec) : 50=99.64%, 100=0.36% 00:45:29.324 cpu : usr=98.44%, sys=1.11%, ctx=20, majf=0, minf=1632 00:45:29.324 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:29.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.324 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.324 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.324 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:29.324 filename0: (groupid=0, jobs=1): err= 0: pid=70583: Fri Dec 13 10:47:22 2024 00:45:29.324 read: IOPS=440, BW=1761KiB/s (1804kB/s)(17.2MiB/10029msec) 00:45:29.324 slat (nsec): min=4642, max=80874, avg=32097.33, stdev=11960.81 00:45:29.324 clat (usec): min=18196, max=75769, 
avg=36042.62, stdev=2692.78 00:45:29.324 lat (usec): min=18226, max=75787, avg=36074.72, stdev=2690.85 00:45:29.324 clat percentiles (usec): 00:45:29.324 | 1.00th=[35390], 5.00th=[35390], 10.00th=[35390], 20.00th=[35914], 00:45:29.324 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:29.324 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:45:29.324 | 99.00th=[37487], 99.50th=[43779], 99.90th=[76022], 99.95th=[76022], 00:45:29.324 | 99.99th=[76022] 00:45:29.324 bw ( KiB/s): min= 1660, max= 1792, per=4.16%, avg=1758.11, stdev=58.28, samples=19 00:45:29.324 iops : min= 415, max= 448, avg=439.53, stdev=14.57, samples=19 00:45:29.324 lat (msec) : 20=0.36%, 50=99.28%, 100=0.36% 00:45:29.324 cpu : usr=98.54%, sys=0.99%, ctx=16, majf=0, minf=1632 00:45:29.324 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:29.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.324 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.324 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.324 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:29.324 filename1: (groupid=0, jobs=1): err= 0: pid=70584: Fri Dec 13 10:47:22 2024 00:45:29.324 read: IOPS=439, BW=1758KiB/s (1800kB/s)(17.2MiB/10013msec) 00:45:29.324 slat (nsec): min=6945, max=73722, avg=25863.00, stdev=8502.94 00:45:29.324 clat (usec): min=24542, max=90125, avg=36162.69, stdev=3368.32 00:45:29.324 lat (usec): min=24552, max=90149, avg=36188.55, stdev=3367.67 00:45:29.324 clat percentiles (usec): 00:45:29.324 | 1.00th=[35390], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:45:29.324 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:29.324 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:45:29.324 | 99.00th=[37487], 99.50th=[43779], 99.90th=[89654], 99.95th=[89654], 00:45:29.324 | 99.99th=[89654] 00:45:29.324 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1751.58, stdev=74.55, samples=19 00:45:29.324 iops : min= 384, max= 448, avg=437.89, stdev=18.64, samples=19 00:45:29.324 lat (msec) : 50=99.64%, 100=0.36% 00:45:29.324 cpu : usr=98.33%, sys=1.21%, ctx=14, majf=0, minf=1638 00:45:29.324 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:29.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.324 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.324 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.324 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:29.324 filename1: (groupid=0, jobs=1): err= 0: pid=70585: Fri Dec 13 10:47:22 2024 00:45:29.324 read: IOPS=439, BW=1756KiB/s (1798kB/s)(17.2MiB/10022msec) 00:45:29.324 slat (nsec): min=7548, max=94372, avg=31259.63, stdev=18692.03 00:45:29.324 clat (usec): min=17695, max=96659, avg=36083.14, stdev=3844.52 00:45:29.324 lat (usec): min=17721, max=96685, avg=36114.40, stdev=3843.83 00:45:29.324 clat percentiles (usec): 00:45:29.324 | 1.00th=[35390], 5.00th=[35390], 10.00th=[35390], 20.00th=[35914], 00:45:29.324 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:29.324 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:45:29.324 | 99.00th=[39584], 99.50th=[40633], 99.90th=[96994], 99.95th=[96994], 00:45:29.324 | 99.99th=[96994] 00:45:29.324 bw ( KiB/s): min= 1539, max= 1792, per=4.14%, avg=1751.74, stdev=74.07, 
samples=19 00:45:29.324 iops : min= 384, max= 448, avg=437.89, stdev=18.64, samples=19 00:45:29.324 lat (msec) : 20=0.32%, 50=99.32%, 100=0.36% 00:45:29.324 cpu : usr=98.43%, sys=1.13%, ctx=14, majf=0, minf=1634 00:45:29.324 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:29.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.324 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.324 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.324 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:29.324 filename1: (groupid=0, jobs=1): err= 0: pid=70586: Fri Dec 13 10:47:22 2024 00:45:29.324 read: IOPS=439, BW=1756KiB/s (1799kB/s)(17.2MiB/10020msec) 00:45:29.324 slat (nsec): min=5681, max=83957, avg=26931.18, stdev=7767.29 00:45:29.324 clat (usec): min=27912, max=93647, avg=36198.26, stdev=3536.88 00:45:29.324 lat (usec): min=27942, max=93666, avg=36225.20, stdev=3535.72 00:45:29.324 clat percentiles (usec): 00:45:29.324 | 1.00th=[35390], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:45:29.324 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:29.324 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:45:29.324 | 99.00th=[37487], 99.50th=[43779], 99.90th=[93848], 99.95th=[93848], 00:45:29.324 | 99.99th=[93848] 00:45:29.324 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1751.58, stdev=74.55, samples=19 00:45:29.324 iops : min= 384, max= 448, avg=437.89, stdev=18.64, samples=19 00:45:29.324 lat (msec) : 50=99.64%, 100=0.36% 00:45:29.324 cpu : usr=98.45%, sys=1.09%, ctx=26, majf=0, minf=1635 00:45:29.324 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:29.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.324 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.324 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.324 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:29.324 filename1: (groupid=0, jobs=1): err= 0: pid=70587: Fri Dec 13 10:47:22 2024 00:45:29.324 read: IOPS=440, BW=1761KiB/s (1804kB/s)(17.2MiB/10028msec) 00:45:29.324 slat (nsec): min=8891, max=55155, avg=25939.75, stdev=7165.10 00:45:29.324 clat (usec): min=18683, max=74810, avg=36098.38, stdev=2601.08 00:45:29.324 lat (usec): min=18693, max=74833, avg=36124.32, stdev=2600.55 00:45:29.324 clat percentiles (usec): 00:45:29.324 | 1.00th=[35390], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:45:29.324 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:29.324 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:45:29.324 | 99.00th=[37487], 99.50th=[43779], 99.90th=[74974], 99.95th=[74974], 00:45:29.324 | 99.99th=[74974] 00:45:29.324 bw ( KiB/s): min= 1664, max= 1792, per=4.16%, avg=1758.32, stdev=57.91, samples=19 00:45:29.324 iops : min= 416, max= 448, avg=439.58, stdev=14.48, samples=19 00:45:29.324 lat (msec) : 20=0.36%, 50=99.28%, 100=0.36% 00:45:29.324 cpu : usr=98.47%, sys=1.07%, ctx=14, majf=0, minf=1638 00:45:29.324 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:29.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.324 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.324 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.324 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:45:29.324 filename1: (groupid=0, jobs=1): err= 0: pid=70588: Fri Dec 13 10:47:22 2024 00:45:29.324 read: IOPS=439, BW=1759KiB/s (1801kB/s)(17.2MiB/10007msec) 00:45:29.324 slat (nsec): min=9316, max=59757, avg=26427.88, stdev=7971.49 00:45:29.324 clat (usec): min=34352, max=73219, avg=36141.40, stdev=2293.37 00:45:29.324 lat (usec): min=34369, max=73255, avg=36167.83, stdev=2292.99 00:45:29.324 clat percentiles (usec): 00:45:29.324 | 1.00th=[35390], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:45:29.324 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:29.324 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:45:29.324 | 99.00th=[37487], 99.50th=[43779], 99.90th=[72877], 99.95th=[72877], 00:45:29.324 | 99.99th=[72877] 00:45:29.324 bw ( KiB/s): min= 1536, max= 1792, per=4.16%, avg=1758.32, stdev=71.93, samples=19 00:45:29.324 iops : min= 384, max= 448, avg=439.58, stdev=17.98, samples=19 00:45:29.324 lat (msec) : 50=99.64%, 100=0.36% 00:45:29.324 cpu : usr=98.60%, sys=0.95%, ctx=26, majf=0, minf=1638 00:45:29.324 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:29.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.324 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.324 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.324 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:29.324 filename1: (groupid=0, jobs=1): err= 0: pid=70589: Fri Dec 13 10:47:22 2024 00:45:29.324 read: IOPS=439, BW=1758KiB/s (1800kB/s)(17.2MiB/10011msec) 00:45:29.324 slat (nsec): min=5666, max=94059, avg=30562.90, stdev=18285.48 00:45:29.324 clat (usec): min=17587, max=97551, avg=36086.44, stdev=3898.70 00:45:29.324 lat (usec): min=17616, max=97570, avg=36117.00, stdev=3897.83 00:45:29.324 clat percentiles (usec): 00:45:29.324 | 1.00th=[35390], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:45:29.324 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:29.324 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:45:29.325 | 99.00th=[39584], 99.50th=[40633], 99.90th=[98042], 99.95th=[98042], 00:45:29.325 | 99.99th=[98042] 00:45:29.325 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1751.58, stdev=74.55, samples=19 00:45:29.325 iops : min= 384, max= 448, avg=437.89, stdev=18.64, samples=19 00:45:29.325 lat (msec) : 20=0.36%, 50=99.27%, 100=0.36% 00:45:29.325 cpu : usr=98.30%, sys=1.25%, ctx=14, majf=0, minf=1633 00:45:29.325 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:29.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.325 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.325 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.325 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:29.325 filename1: (groupid=0, jobs=1): err= 0: pid=70590: Fri Dec 13 10:47:22 2024 00:45:29.325 read: IOPS=446, BW=1786KiB/s (1829kB/s)(17.5MiB/10031msec) 00:45:29.325 slat (nsec): min=6222, max=57142, avg=20356.73, stdev=7610.21 00:45:29.325 clat (usec): min=4465, max=44879, avg=35657.34, stdev=3448.98 00:45:29.325 lat (usec): min=4481, max=44901, avg=35677.69, stdev=3449.43 00:45:29.325 clat percentiles (usec): 00:45:29.325 | 1.00th=[ 8979], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:45:29.325 | 30.00th=[35914], 40.00th=[35914], 
50.00th=[35914], 60.00th=[35914], 00:45:29.325 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36439], 00:45:29.325 | 99.00th=[37487], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:45:29.325 | 99.99th=[44827] 00:45:29.325 bw ( KiB/s): min= 1664, max= 2176, per=4.22%, avg=1785.26, stdev=108.56, samples=19 00:45:29.325 iops : min= 416, max= 544, avg=446.32, stdev=27.14, samples=19 00:45:29.325 lat (msec) : 10=1.07%, 20=0.36%, 50=98.57% 00:45:29.325 cpu : usr=98.48%, sys=1.06%, ctx=15, majf=0, minf=1636 00:45:29.325 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:29.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.325 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.325 issued rwts: total=4480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.325 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:29.325 filename1: (groupid=0, jobs=1): err= 0: pid=70591: Fri Dec 13 10:47:22 2024 00:45:29.325 read: IOPS=439, BW=1758KiB/s (1800kB/s)(17.2MiB/10010msec) 00:45:29.325 slat (nsec): min=6963, max=94727, avg=30442.42, stdev=18350.82 00:45:29.325 clat (usec): min=17541, max=96913, avg=36084.16, stdev=3862.68 00:45:29.325 lat (usec): min=17553, max=96937, avg=36114.60, stdev=3861.90 00:45:29.325 clat percentiles (usec): 00:45:29.325 | 1.00th=[35390], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:45:29.325 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:29.325 | 70.00th=[35914], 80.00th=[35914], 90.00th=[35914], 95.00th=[36439], 00:45:29.325 | 99.00th=[39584], 99.50th=[40633], 99.90th=[96994], 99.95th=[96994], 00:45:29.325 | 99.99th=[96994] 00:45:29.325 bw ( KiB/s): min= 1539, max= 1792, per=4.14%, avg=1751.74, stdev=74.07, samples=19 00:45:29.325 iops : min= 384, max= 448, avg=437.89, stdev=18.64, samples=19 00:45:29.325 lat (msec) : 20=0.36%, 50=99.27%, 100=0.36% 00:45:29.325 cpu : usr=98.47%, sys=1.08%, ctx=16, majf=0, minf=1635 00:45:29.325 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:29.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.325 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.325 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.325 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:29.325 filename2: (groupid=0, jobs=1): err= 0: pid=70592: Fri Dec 13 10:47:22 2024 00:45:29.325 read: IOPS=439, BW=1757KiB/s (1799kB/s)(17.2MiB/10019msec) 00:45:29.325 slat (nsec): min=4129, max=58419, avg=26141.45, stdev=6902.29 00:45:29.325 clat (usec): min=27913, max=92264, avg=36197.25, stdev=3455.05 00:45:29.325 lat (usec): min=27945, max=92278, avg=36223.39, stdev=3453.85 00:45:29.325 clat percentiles (usec): 00:45:29.325 | 1.00th=[35390], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:45:29.325 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:29.325 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:45:29.325 | 99.00th=[37487], 99.50th=[43254], 99.90th=[91751], 99.95th=[91751], 00:45:29.325 | 99.99th=[91751] 00:45:29.325 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1751.58, stdev=74.55, samples=19 00:45:29.325 iops : min= 384, max= 448, avg=437.89, stdev=18.64, samples=19 00:45:29.325 lat (msec) : 50=99.64%, 100=0.36% 00:45:29.325 cpu : usr=98.52%, sys=1.02%, ctx=18, majf=0, minf=1634 00:45:29.325 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 
16=6.2%, 32=0.0%, >=64=0.0% 00:45:29.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.325 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.325 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.325 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:29.325 filename2: (groupid=0, jobs=1): err= 0: pid=70593: Fri Dec 13 10:47:22 2024 00:45:29.325 read: IOPS=438, BW=1755KiB/s (1798kB/s)(17.2MiB/10026msec) 00:45:29.325 slat (nsec): min=4315, max=55194, avg=24944.68, stdev=7970.15 00:45:29.325 clat (usec): min=28030, max=99219, avg=36252.54, stdev=3862.58 00:45:29.325 lat (usec): min=28058, max=99236, avg=36277.48, stdev=3861.44 00:45:29.325 clat percentiles (usec): 00:45:29.325 | 1.00th=[35390], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:45:29.325 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:29.325 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:45:29.325 | 99.00th=[37487], 99.50th=[43254], 99.90th=[99091], 99.95th=[99091], 00:45:29.325 | 99.99th=[99091] 00:45:29.325 bw ( KiB/s): min= 1536, max= 1792, per=4.14%, avg=1751.58, stdev=74.55, samples=19 00:45:29.325 iops : min= 384, max= 448, avg=437.89, stdev=18.64, samples=19 00:45:29.325 lat (msec) : 50=99.64%, 100=0.36% 00:45:29.325 cpu : usr=98.40%, sys=1.14%, ctx=17, majf=0, minf=1634 00:45:29.325 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:29.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.325 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.325 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.325 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:29.325 filename2: (groupid=0, jobs=1): err= 0: pid=70594: Fri Dec 13 10:47:22 2024 00:45:29.325 read: IOPS=439, BW=1758KiB/s (1800kB/s)(17.2MiB/10011msec) 00:45:29.325 slat (nsec): min=4175, max=96608, avg=20480.45, stdev=15524.27 00:45:29.325 clat (usec): min=23918, max=83995, avg=36236.46, stdev=2974.56 00:45:29.325 lat (usec): min=23928, max=84016, avg=36256.94, stdev=2973.42 00:45:29.325 clat percentiles (usec): 00:45:29.325 | 1.00th=[35390], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:45:29.325 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:29.325 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36439], 00:45:29.325 | 99.00th=[39584], 99.50th=[41157], 99.90th=[84411], 99.95th=[84411], 00:45:29.325 | 99.99th=[84411] 00:45:29.325 bw ( KiB/s): min= 1536, max= 1792, per=4.16%, avg=1758.32, stdev=71.93, samples=19 00:45:29.325 iops : min= 384, max= 448, avg=439.58, stdev=17.98, samples=19 00:45:29.325 lat (msec) : 50=99.64%, 100=0.36% 00:45:29.325 cpu : usr=98.25%, sys=1.30%, ctx=21, majf=0, minf=1635 00:45:29.325 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:29.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.325 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.325 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.325 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:29.325 filename2: (groupid=0, jobs=1): err= 0: pid=70595: Fri Dec 13 10:47:22 2024 00:45:29.325 read: IOPS=457, BW=1830KiB/s (1874kB/s)(17.9MiB/10032msec) 00:45:29.325 slat (nsec): min=3930, max=86462, avg=12567.00, stdev=4975.82 00:45:29.325 clat 
(usec): min=1464, max=49548, avg=34861.10, stdev=6119.77 00:45:29.325 lat (usec): min=1473, max=49566, avg=34873.67, stdev=6120.20 00:45:29.325 clat percentiles (usec): 00:45:29.325 | 1.00th=[ 1811], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:45:29.325 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:29.325 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36439], 00:45:29.325 | 99.00th=[38011], 99.50th=[41157], 99.90th=[49546], 99.95th=[49546], 00:45:29.325 | 99.99th=[49546] 00:45:29.325 bw ( KiB/s): min= 1664, max= 3048, per=4.33%, avg=1829.20, stdev=291.58, samples=20 00:45:29.325 iops : min= 416, max= 762, avg=457.30, stdev=72.90, samples=20 00:45:29.325 lat (msec) : 2=1.39%, 10=1.94%, 20=0.63%, 50=96.03% 00:45:29.325 cpu : usr=98.41%, sys=1.14%, ctx=16, majf=0, minf=1635 00:45:29.325 IO depths : 1=5.9%, 2=12.0%, 4=24.2%, 8=51.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:45:29.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.325 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.325 issued rwts: total=4589,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.325 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:29.325 filename2: (groupid=0, jobs=1): err= 0: pid=70596: Fri Dec 13 10:47:22 2024 00:45:29.325 read: IOPS=440, BW=1764KiB/s (1806kB/s)(17.2MiB/10014msec) 00:45:29.325 slat (nsec): min=4123, max=53722, avg=26355.02, stdev=7848.57 00:45:29.326 clat (usec): min=18681, max=60565, avg=36037.48, stdev=1876.01 00:45:29.326 lat (usec): min=18697, max=60581, avg=36063.83, stdev=1875.44 00:45:29.326 clat percentiles (usec): 00:45:29.326 | 1.00th=[35390], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:45:29.326 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:29.326 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:45:29.326 | 99.00th=[37487], 99.50th=[43779], 99.90th=[60556], 99.95th=[60556], 00:45:29.326 | 99.99th=[60556] 00:45:29.326 bw ( KiB/s): min= 1664, max= 1792, per=4.18%, avg=1765.05, stdev=53.61, samples=19 00:45:29.326 iops : min= 416, max= 448, avg=441.26, stdev=13.40, samples=19 00:45:29.326 lat (msec) : 20=0.36%, 50=99.28%, 100=0.36% 00:45:29.326 cpu : usr=98.57%, sys=0.97%, ctx=15, majf=0, minf=1633 00:45:29.326 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:29.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.326 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.326 issued rwts: total=4416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.326 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:29.326 filename2: (groupid=0, jobs=1): err= 0: pid=70597: Fri Dec 13 10:47:22 2024 00:45:29.326 read: IOPS=446, BW=1785KiB/s (1828kB/s)(17.4MiB/10002msec) 00:45:29.326 slat (nsec): min=4458, max=53892, avg=14132.11, stdev=5152.58 00:45:29.326 clat (usec): min=4449, max=44811, avg=35724.29, stdev=3313.19 00:45:29.326 lat (usec): min=4466, max=44835, avg=35738.42, stdev=3312.84 00:45:29.326 clat percentiles (usec): 00:45:29.326 | 1.00th=[ 8979], 5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:45:29.326 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:29.326 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36439], 00:45:29.326 | 99.00th=[37487], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:45:29.326 | 99.99th=[44827] 00:45:29.326 bw ( KiB/s): min= 1664, max= 
2048, per=4.22%, avg=1785.26, stdev=79.52, samples=19 00:45:29.326 iops : min= 416, max= 512, avg=446.32, stdev=19.88, samples=19 00:45:29.326 lat (msec) : 10=1.08%, 50=98.92% 00:45:29.326 cpu : usr=98.28%, sys=1.25%, ctx=31, majf=0, minf=1634 00:45:29.326 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:29.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.326 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.326 issued rwts: total=4464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.326 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:29.326 filename2: (groupid=0, jobs=1): err= 0: pid=70598: Fri Dec 13 10:47:22 2024 00:45:29.326 read: IOPS=439, BW=1759KiB/s (1802kB/s)(17.2MiB/10004msec) 00:45:29.326 slat (nsec): min=9675, max=78759, avg=31502.63, stdev=12388.40 00:45:29.326 clat (usec): min=27923, max=78132, avg=36083.88, stdev=2627.29 00:45:29.326 lat (usec): min=27940, max=78159, avg=36115.38, stdev=2626.55 00:45:29.326 clat percentiles (usec): 00:45:29.326 | 1.00th=[35390], 5.00th=[35390], 10.00th=[35390], 20.00th=[35914], 00:45:29.326 | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914], 00:45:29.326 | 70.00th=[35914], 80.00th=[35914], 90.00th=[36439], 95.00th=[36439], 00:45:29.326 | 99.00th=[37487], 99.50th=[43254], 99.90th=[78119], 99.95th=[78119], 00:45:29.326 | 99.99th=[78119] 00:45:29.326 bw ( KiB/s): min= 1536, max= 1792, per=4.16%, avg=1758.32, stdev=71.93, samples=19 00:45:29.326 iops : min= 384, max= 448, avg=439.58, stdev=17.98, samples=19 00:45:29.326 lat (msec) : 50=99.64%, 100=0.36% 00:45:29.326 cpu : usr=98.40%, sys=1.14%, ctx=15, majf=0, minf=1632 00:45:29.326 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:29.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.326 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.326 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.326 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:29.326 filename2: (groupid=0, jobs=1): err= 0: pid=70599: Fri Dec 13 10:47:22 2024 00:45:29.326 read: IOPS=439, BW=1758KiB/s (1800kB/s)(17.2MiB/10014msec) 00:45:29.326 slat (nsec): min=7846, max=96507, avg=31345.03, stdev=18750.67 00:45:29.326 clat (msec): min=17, max=100, avg=36.09, stdev= 4.09 00:45:29.326 lat (msec): min=17, max=100, avg=36.12, stdev= 4.09 00:45:29.326 clat percentiles (msec): 00:45:29.326 | 1.00th=[ 35], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], 00:45:29.326 | 30.00th=[ 36], 40.00th=[ 36], 50.00th=[ 36], 60.00th=[ 36], 00:45:29.326 | 70.00th=[ 36], 80.00th=[ 36], 90.00th=[ 36], 95.00th=[ 37], 00:45:29.326 | 99.00th=[ 40], 99.50th=[ 41], 99.90th=[ 102], 99.95th=[ 102], 00:45:29.326 | 99.99th=[ 102] 00:45:29.326 bw ( KiB/s): min= 1539, max= 1792, per=4.15%, avg=1753.75, stdev=72.65, samples=20 00:45:29.326 iops : min= 384, max= 448, avg=438.40, stdev=18.28, samples=20 00:45:29.326 lat (msec) : 20=0.36%, 50=99.27%, 250=0.36% 00:45:29.326 cpu : usr=98.32%, sys=1.22%, ctx=15, majf=0, minf=1632 00:45:29.326 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:45:29.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.326 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:29.326 issued rwts: total=4400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:29.326 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:45:29.326 00:45:29.326 Run status group 0 (all jobs): 00:45:29.326 READ: bw=41.3MiB/s (43.3MB/s), 1753KiB/s-1830KiB/s (1795kB/s-1874kB/s), io=414MiB (434MB), run=10002-10032msec 00:45:29.893 ----------------------------------------------------- 00:45:29.893 Suppressions used: 00:45:29.893 count bytes template 00:45:29.893 45 402 /usr/src/fio/parse.c 00:45:29.893 1 8 libtcmalloc_minimal.so 00:45:29.893 1 904 libcrypto.so 00:45:29.893 ----------------------------------------------------- 00:45:29.893 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.893 bdev_null0 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:29.893 [2024-12-13 10:47:23.774577] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:29.893 10:47:23 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:29.893 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:30.152 bdev_null1 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # shift 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:30.152 10:47:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:30.152 { 00:45:30.152 "params": { 00:45:30.152 "name": "Nvme$subsystem", 00:45:30.152 "trtype": "$TEST_TRANSPORT", 00:45:30.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:30.153 "adrfam": "ipv4", 00:45:30.153 "trsvcid": "$NVMF_PORT", 00:45:30.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:30.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:30.153 "hdgst": ${hdgst:-false}, 00:45:30.153 "ddgst": ${ddgst:-false} 00:45:30.153 }, 00:45:30.153 "method": "bdev_nvme_attach_controller" 00:45:30.153 } 00:45:30.153 EOF 00:45:30.153 )") 00:45:30.153 10:47:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:30.153 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:30.153 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:30.153 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:30.153 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:30.153 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:30.153 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:30.153 10:47:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:30.153 10:47:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:30.153 { 00:45:30.153 "params": { 00:45:30.153 "name": "Nvme$subsystem", 00:45:30.153 "trtype": "$TEST_TRANSPORT", 00:45:30.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:30.153 "adrfam": "ipv4", 00:45:30.153 "trsvcid": "$NVMF_PORT", 00:45:30.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:30.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:30.153 "hdgst": ${hdgst:-false}, 00:45:30.153 "ddgst": ${ddgst:-false} 00:45:30.153 }, 00:45:30.153 "method": "bdev_nvme_attach_controller" 00:45:30.153 } 00:45:30.153 EOF 00:45:30.153 )") 00:45:30.153 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:30.153 10:47:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:30.153 10:47:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:30.153 10:47:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:45:30.153 10:47:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:30.153 10:47:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:30.153 "params": { 00:45:30.153 "name": "Nvme0", 00:45:30.153 "trtype": "tcp", 00:45:30.153 "traddr": "10.0.0.2", 00:45:30.153 "adrfam": "ipv4", 00:45:30.153 "trsvcid": "4420", 00:45:30.153 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:30.153 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:30.153 "hdgst": false, 00:45:30.153 "ddgst": false 00:45:30.153 }, 00:45:30.153 "method": "bdev_nvme_attach_controller" 00:45:30.153 },{ 00:45:30.153 "params": { 00:45:30.153 "name": "Nvme1", 00:45:30.153 "trtype": "tcp", 00:45:30.153 "traddr": "10.0.0.2", 00:45:30.153 "adrfam": "ipv4", 00:45:30.153 "trsvcid": "4420", 00:45:30.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:30.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:30.153 "hdgst": false, 00:45:30.153 "ddgst": false 00:45:30.153 }, 00:45:30.153 "method": "bdev_nvme_attach_controller" 00:45:30.153 }' 00:45:30.153 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:30.153 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:30.153 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:45:30.153 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:30.153 10:47:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:30.411 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:30.411 ... 00:45:30.411 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:30.411 ... 
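For reference: the JSON printed just above is the bdev-layer configuration that the fio spdk_bdev plugin reads over /dev/fd/62 — one bdev_nvme_attach_controller entry per target subsystem, each attaching over TCP to 10.0.0.2:4420 so the jobs can read from the null bdevs behind nqn.2016-06.io.spdk:cnode0 and cnode1. A roughly equivalent standalone invocation is sketched below as an illustration only; the harness generates both the config and the job file on the fly via gen_nvmf_target_json and gen_fio_conf, so the wrapper structure around the two controller entries, the file paths and the exact job layout here are assumptions rather than verbatim harness output.

# Sketch: reproduce the 4-thread randread run by hand (assumed paths and layout).
cat > /tmp/nvmf_bdev.json <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0",
                    "hdgst": false, "ddgst": false } },
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false } }
    ]
  } ]
}
EOF
# Job parameters mirror what the trace shows (randread, bs=8k,16k,128k, iodepth=8,
# numjobs=2, runtime=5s); the SPDK fio plugin requires thread=1, and the attached
# controllers expose their namespaces as bdevs Nvme0n1 / Nvme1n1. Two job sections
# with numjobs=2 give the 4 threads reported below.
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev fio \
  --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvmf_bdev.json --thread=1 \
  --rw=randread --bs=8k,16k,128k --iodepth=8 --numjobs=2 --runtime=5 --time_based=1 \
  --name=filename0 --filename=Nvme0n1 --name=filename1 --filename=Nvme1n1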
00:45:30.411 fio-3.35 00:45:30.411 Starting 4 threads 00:45:36.969 00:45:36.969 filename0: (groupid=0, jobs=1): err= 0: pid=72722: Fri Dec 13 10:47:30 2024 00:45:36.969 read: IOPS=2280, BW=17.8MiB/s (18.7MB/s)(89.1MiB/5002msec) 00:45:36.969 slat (nsec): min=7024, max=48118, avg=16058.98, stdev=4996.16 00:45:36.969 clat (usec): min=750, max=7298, avg=3447.11, stdev=414.96 00:45:36.969 lat (usec): min=764, max=7313, avg=3463.17, stdev=415.64 00:45:36.969 clat percentiles (usec): 00:45:36.969 | 1.00th=[ 2278], 5.00th=[ 2802], 10.00th=[ 3097], 20.00th=[ 3261], 00:45:36.969 | 30.00th=[ 3359], 40.00th=[ 3392], 50.00th=[ 3425], 60.00th=[ 3490], 00:45:36.969 | 70.00th=[ 3556], 80.00th=[ 3687], 90.00th=[ 3884], 95.00th=[ 4047], 00:45:36.969 | 99.00th=[ 4555], 99.50th=[ 5014], 99.90th=[ 5866], 99.95th=[ 6652], 00:45:36.969 | 99.99th=[ 7177] 00:45:36.969 bw ( KiB/s): min=17536, max=20080, per=25.50%, avg=18355.56, stdev=763.48, samples=9 00:45:36.969 iops : min= 2192, max= 2510, avg=2294.44, stdev=95.43, samples=9 00:45:36.969 lat (usec) : 1000=0.24% 00:45:36.969 lat (msec) : 2=0.60%, 4=93.30%, 10=5.87% 00:45:36.969 cpu : usr=94.24%, sys=4.14%, ctx=245, majf=0, minf=1634 00:45:36.969 IO depths : 1=0.9%, 2=20.6%, 4=53.3%, 8=25.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:36.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:36.969 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:36.969 issued rwts: total=11406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:36.969 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:36.969 filename0: (groupid=0, jobs=1): err= 0: pid=72723: Fri Dec 13 10:47:30 2024 00:45:36.969 read: IOPS=2249, BW=17.6MiB/s (18.4MB/s)(87.9MiB/5003msec) 00:45:36.969 slat (nsec): min=6830, max=58111, avg=16073.69, stdev=5160.95 00:45:36.969 clat (usec): min=737, max=6695, avg=3493.80, stdev=394.09 00:45:36.969 lat (usec): min=752, max=6710, avg=3509.88, stdev=394.08 00:45:36.969 clat percentiles (usec): 00:45:36.969 | 1.00th=[ 2409], 5.00th=[ 2999], 10.00th=[ 3163], 20.00th=[ 3294], 00:45:36.969 | 30.00th=[ 3359], 40.00th=[ 3392], 50.00th=[ 3458], 60.00th=[ 3490], 00:45:36.969 | 70.00th=[ 3589], 80.00th=[ 3720], 90.00th=[ 3949], 95.00th=[ 4080], 00:45:36.969 | 99.00th=[ 4752], 99.50th=[ 5145], 99.90th=[ 6259], 99.95th=[ 6456], 00:45:36.969 | 99.99th=[ 6718] 00:45:36.969 bw ( KiB/s): min=16880, max=19104, per=25.22%, avg=18151.11, stdev=687.82, samples=9 00:45:36.969 iops : min= 2110, max= 2388, avg=2268.89, stdev=85.98, samples=9 00:45:36.969 lat (usec) : 750=0.01%, 1000=0.04% 00:45:36.969 lat (msec) : 2=0.36%, 4=91.85%, 10=7.74% 00:45:36.969 cpu : usr=92.36%, sys=5.04%, ctx=214, majf=0, minf=1632 00:45:36.969 IO depths : 1=2.6%, 2=19.8%, 4=54.1%, 8=23.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:36.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:36.969 complete : 0=0.0%, 4=90.7%, 8=9.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:36.969 issued rwts: total=11253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:36.969 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:36.969 filename1: (groupid=0, jobs=1): err= 0: pid=72724: Fri Dec 13 10:47:30 2024 00:45:36.969 read: IOPS=2237, BW=17.5MiB/s (18.3MB/s)(87.4MiB/5001msec) 00:45:36.969 slat (nsec): min=7107, max=54782, avg=15220.92, stdev=4981.01 00:45:36.969 clat (usec): min=726, max=7625, avg=3524.98, stdev=453.67 00:45:36.969 lat (usec): min=738, max=7640, avg=3540.20, stdev=453.62 00:45:36.969 clat percentiles (usec): 00:45:36.969 | 1.00th=[ 
2311], 5.00th=[ 2966], 10.00th=[ 3195], 20.00th=[ 3326], 00:45:36.969 | 30.00th=[ 3392], 40.00th=[ 3425], 50.00th=[ 3458], 60.00th=[ 3523], 00:45:36.969 | 70.00th=[ 3589], 80.00th=[ 3720], 90.00th=[ 3982], 95.00th=[ 4178], 00:45:36.969 | 99.00th=[ 5276], 99.50th=[ 5735], 99.90th=[ 6390], 99.95th=[ 6652], 00:45:36.969 | 99.99th=[ 7570] 00:45:36.969 bw ( KiB/s): min=17024, max=19184, per=25.04%, avg=18019.56, stdev=611.98, samples=9 00:45:36.969 iops : min= 2128, max= 2398, avg=2252.44, stdev=76.50, samples=9 00:45:36.970 lat (usec) : 750=0.01%, 1000=0.04% 00:45:36.970 lat (msec) : 2=0.46%, 4=90.82%, 10=8.68% 00:45:36.970 cpu : usr=96.08%, sys=3.46%, ctx=18, majf=0, minf=1634 00:45:36.970 IO depths : 1=0.2%, 2=11.5%, 4=61.2%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:36.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:36.970 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:36.970 issued rwts: total=11190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:36.970 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:36.970 filename1: (groupid=0, jobs=1): err= 0: pid=72725: Fri Dec 13 10:47:30 2024 00:45:36.970 read: IOPS=2230, BW=17.4MiB/s (18.3MB/s)(87.2MiB/5003msec) 00:45:36.970 slat (usec): min=6, max=204, avg=13.61, stdev= 5.55 00:45:36.970 clat (usec): min=955, max=43485, avg=3543.77, stdev=1125.39 00:45:36.970 lat (usec): min=970, max=43510, avg=3557.39, stdev=1125.28 00:45:36.970 clat percentiles (usec): 00:45:36.970 | 1.00th=[ 2606], 5.00th=[ 3064], 10.00th=[ 3195], 20.00th=[ 3326], 00:45:36.970 | 30.00th=[ 3392], 40.00th=[ 3425], 50.00th=[ 3458], 60.00th=[ 3523], 00:45:36.970 | 70.00th=[ 3589], 80.00th=[ 3720], 90.00th=[ 3949], 95.00th=[ 4080], 00:45:36.970 | 99.00th=[ 4686], 99.50th=[ 5014], 99.90th=[ 6521], 99.95th=[43254], 00:45:36.970 | 99.99th=[43254] 00:45:36.970 bw ( KiB/s): min=16192, max=19168, per=24.93%, avg=17944.89, stdev=907.92, samples=9 00:45:36.970 iops : min= 2024, max= 2396, avg=2243.11, stdev=113.49, samples=9 00:45:36.970 lat (usec) : 1000=0.01% 00:45:36.970 lat (msec) : 2=0.22%, 4=92.51%, 10=7.19%, 50=0.07% 00:45:36.970 cpu : usr=95.78%, sys=3.80%, ctx=12, majf=0, minf=1635 00:45:36.970 IO depths : 1=1.4%, 2=7.7%, 4=65.9%, 8=24.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:36.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:36.970 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:36.970 issued rwts: total=11158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:36.970 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:36.970 00:45:36.970 Run status group 0 (all jobs): 00:45:36.970 READ: bw=70.3MiB/s (73.7MB/s), 17.4MiB/s-17.8MiB/s (18.3MB/s-18.7MB/s), io=352MiB (369MB), run=5001-5003msec 00:45:37.905 ----------------------------------------------------- 00:45:37.905 Suppressions used: 00:45:37.905 count bytes template 00:45:37.905 6 52 /usr/src/fio/parse.c 00:45:37.905 1 8 libtcmalloc_minimal.so 00:45:37.905 1 904 libcrypto.so 00:45:37.905 ----------------------------------------------------- 00:45:37.905 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 
-- # local sub_id=0 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:37.905 00:45:37.905 real 0m28.572s 00:45:37.905 user 4m56.815s 00:45:37.905 sys 0m5.698s 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:37.905 10:47:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:37.905 ************************************ 00:45:37.905 END TEST fio_dif_rand_params 00:45:37.905 ************************************ 00:45:37.905 10:47:31 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:45:37.905 10:47:31 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:37.905 10:47:31 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:37.905 10:47:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:37.905 ************************************ 00:45:37.905 START TEST fio_dif_digest 00:45:37.905 ************************************ 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 
00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:37.905 bdev_null0 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:37.905 [2024-12-13 10:47:31.623175] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:45:37.905 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:37.906 { 00:45:37.906 "params": { 00:45:37.906 "name": "Nvme$subsystem", 00:45:37.906 "trtype": "$TEST_TRANSPORT", 00:45:37.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:37.906 "adrfam": "ipv4", 00:45:37.906 "trsvcid": "$NVMF_PORT", 00:45:37.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:37.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:37.906 "hdgst": ${hdgst:-false}, 00:45:37.906 "ddgst": ${ddgst:-false} 00:45:37.906 }, 00:45:37.906 "method": "bdev_nvme_attach_controller" 00:45:37.906 } 00:45:37.906 EOF 00:45:37.906 )") 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
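For this digest pass the target side was prepared just above with rpc_cmd: a null bdev with 512-byte blocks, 16 bytes of metadata per block and DIF type 3, exported through nqn.2016-06.io.spdk:cnode0 on the same 10.0.0.2:4420 TCP listener. Unlike the earlier runs, dif.sh sets hdgst=true and ddgst=true here, so the bdev_nvme_attach_controller entry generated below makes the initiator negotiate NVMe/TCP header and data digests (CRC32C over PDU header and payload), which is what fio_dif_digest exercises. Against a standalone nvmf_tgt the same target setup could be done by hand roughly as follows — a sketch only, using scripts/rpc.py directly instead of the harness's rpc_cmd wrapper and assuming the tcp transport was already created earlier in the run:

# Sketch: hand-built equivalent of the target-side setup traced above.
# Assumes a running nvmf_tgt with the tcp transport already created
# (scripts/rpc.py nvmf_create_transport -t tcp).
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420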
00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:37.906 "params": { 00:45:37.906 "name": "Nvme0", 00:45:37.906 "trtype": "tcp", 00:45:37.906 "traddr": "10.0.0.2", 00:45:37.906 "adrfam": "ipv4", 00:45:37.906 "trsvcid": "4420", 00:45:37.906 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:37.906 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:37.906 "hdgst": true, 00:45:37.906 "ddgst": true 00:45:37.906 }, 00:45:37.906 "method": "bdev_nvme_attach_controller" 00:45:37.906 }' 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # break 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:37.906 10:47:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:38.164 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:38.164 ... 00:45:38.164 fio-3.35 00:45:38.164 Starting 3 threads 00:45:50.359 00:45:50.359 filename0: (groupid=0, jobs=1): err= 0: pid=74045: Fri Dec 13 10:47:43 2024 00:45:50.359 read: IOPS=240, BW=30.0MiB/s (31.5MB/s)(302MiB/10048msec) 00:45:50.359 slat (nsec): min=7453, max=37170, avg=14103.26, stdev=1607.00 00:45:50.359 clat (usec): min=9426, max=53339, avg=12457.97, stdev=1405.41 00:45:50.359 lat (usec): min=9439, max=53353, avg=12472.07, stdev=1405.49 00:45:50.359 clat percentiles (usec): 00:45:50.359 | 1.00th=[10552], 5.00th=[10945], 10.00th=[11338], 20.00th=[11731], 00:45:50.359 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12649], 00:45:50.359 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13435], 95.00th=[13829], 00:45:50.359 | 99.00th=[14615], 99.50th=[15139], 99.90th=[15926], 99.95th=[49021], 00:45:50.359 | 99.99th=[53216] 00:45:50.359 bw ( KiB/s): min=29952, max=32000, per=34.92%, avg=30860.80, stdev=515.19, samples=20 00:45:50.359 iops : min= 234, max= 250, avg=241.10, stdev= 4.02, samples=20 00:45:50.359 lat (msec) : 10=0.12%, 20=99.79%, 50=0.04%, 100=0.04% 00:45:50.359 cpu : usr=94.52%, sys=5.13%, ctx=26, majf=0, minf=1634 00:45:50.359 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:50.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:50.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:50.359 issued rwts: total=2413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:50.359 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:50.359 filename0: (groupid=0, jobs=1): err= 0: pid=74046: Fri Dec 13 10:47:43 2024 00:45:50.359 read: IOPS=222, BW=27.8MiB/s (29.2MB/s)(279MiB/10046msec) 00:45:50.359 slat (nsec): min=7885, max=51878, avg=14695.13, stdev=1836.90 00:45:50.359 clat (usec): min=9626, max=52951, avg=13447.26, stdev=1408.37 00:45:50.359 lat (usec): min=9645, max=52967, avg=13461.95, stdev=1408.56 00:45:50.359 clat percentiles (usec): 00:45:50.359 | 1.00th=[11469], 5.00th=[11994], 10.00th=[12387], 20.00th=[12780], 00:45:50.359 | 30.00th=[12911], 40.00th=[13173], 
50.00th=[13435], 60.00th=[13566], 00:45:50.359 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14484], 95.00th=[14877], 00:45:50.359 | 99.00th=[15533], 99.50th=[15795], 99.90th=[16909], 99.95th=[47973], 00:45:50.359 | 99.99th=[52691] 00:45:50.359 bw ( KiB/s): min=26507, max=29184, per=32.23%, avg=28486.95, stdev=624.51, samples=20 00:45:50.359 iops : min= 207, max= 228, avg=222.55, stdev= 4.89, samples=20 00:45:50.359 lat (msec) : 10=0.04%, 20=99.87%, 50=0.04%, 100=0.04% 00:45:50.359 cpu : usr=94.50%, sys=5.13%, ctx=19, majf=0, minf=1639 00:45:50.359 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:50.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:50.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:50.359 issued rwts: total=2235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:50.359 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:50.359 filename0: (groupid=0, jobs=1): err= 0: pid=74047: Fri Dec 13 10:47:43 2024 00:45:50.359 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(286MiB/10006msec) 00:45:50.359 slat (nsec): min=7599, max=39488, avg=14475.00, stdev=1605.20 00:45:50.359 clat (usec): min=6360, max=16191, avg=13091.54, stdev=897.12 00:45:50.359 lat (usec): min=6374, max=16206, avg=13106.02, stdev=897.31 00:45:50.359 clat percentiles (usec): 00:45:50.359 | 1.00th=[10945], 5.00th=[11600], 10.00th=[11994], 20.00th=[12387], 00:45:50.359 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13304], 00:45:50.359 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14222], 95.00th=[14615], 00:45:50.359 | 99.00th=[15270], 99.50th=[15401], 99.90th=[16188], 99.95th=[16188], 00:45:50.359 | 99.99th=[16188] 00:45:50.359 bw ( KiB/s): min=28416, max=29952, per=33.11%, avg=29264.84, stdev=451.97, samples=19 00:45:50.359 iops : min= 222, max= 234, avg=228.63, stdev= 3.53, samples=19 00:45:50.359 lat (msec) : 10=0.09%, 20=99.91% 00:45:50.359 cpu : usr=94.49%, sys=5.15%, ctx=28, majf=0, minf=1633 00:45:50.359 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:50.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:50.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:50.359 issued rwts: total=2290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:50.359 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:50.359 00:45:50.359 Run status group 0 (all jobs): 00:45:50.359 READ: bw=86.3MiB/s (90.5MB/s), 27.8MiB/s-30.0MiB/s (29.2MB/s-31.5MB/s), io=867MiB (909MB), run=10006-10048msec 00:45:50.359 ----------------------------------------------------- 00:45:50.359 Suppressions used: 00:45:50.359 count bytes template 00:45:50.359 5 44 /usr/src/fio/parse.c 00:45:50.359 1 8 libtcmalloc_minimal.so 00:45:50.359 1 904 libcrypto.so 00:45:50.359 ----------------------------------------------------- 00:45:50.359 00:45:50.359 10:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:45:50.359 10:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:45:50.359 10:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:45:50.359 10:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:50.359 10:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:45:50.359 10:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:50.359 10:47:44 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:45:50.359 10:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:50.359 10:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:50.359 10:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:50.359 10:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:50.359 10:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:50.359 10:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:50.359 00:45:50.359 real 0m12.614s 00:45:50.359 user 0m37.206s 00:45:50.359 sys 0m2.062s 00:45:50.359 10:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:50.359 10:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:50.359 ************************************ 00:45:50.359 END TEST fio_dif_digest 00:45:50.359 ************************************ 00:45:50.359 10:47:44 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:45:50.359 10:47:44 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:45:50.359 10:47:44 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:50.359 10:47:44 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:45:50.359 10:47:44 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:50.359 10:47:44 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:45:50.359 10:47:44 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:50.359 10:47:44 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:50.618 rmmod nvme_tcp 00:45:50.618 rmmod nvme_fabrics 00:45:50.618 rmmod nvme_keyring 00:45:50.618 10:47:44 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:50.618 10:47:44 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:45:50.618 10:47:44 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:45:50.618 10:47:44 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 64592 ']' 00:45:50.618 10:47:44 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 64592 00:45:50.618 10:47:44 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 64592 ']' 00:45:50.618 10:47:44 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 64592 00:45:50.618 10:47:44 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:45:50.618 10:47:44 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:50.618 10:47:44 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64592 00:45:50.618 10:47:44 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:50.618 10:47:44 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:50.618 10:47:44 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64592' 00:45:50.618 killing process with pid 64592 00:45:50.618 10:47:44 nvmf_dif -- common/autotest_common.sh@973 -- # kill 64592 00:45:50.618 10:47:44 nvmf_dif -- common/autotest_common.sh@978 -- # wait 64592 00:45:51.993 10:47:45 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:45:51.993 10:47:45 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:53.895 Waiting for block devices as requested 00:45:53.895 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:45:53.895 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:45:54.153 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:45:54.153 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:45:54.153 0000:00:04.4 
(8086 2021): vfio-pci -> ioatdma 00:45:54.153 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:45:54.412 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:45:54.412 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:45:54.412 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:45:54.412 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:45:54.671 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:45:54.671 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:45:54.671 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:45:54.930 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:45:54.930 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:45:54.930 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:45:54.930 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:45:55.188 10:47:48 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:55.188 10:47:48 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:55.188 10:47:48 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:45:55.188 10:47:48 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:45:55.188 10:47:48 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:55.188 10:47:48 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:45:55.188 10:47:48 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:55.188 10:47:48 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:55.188 10:47:48 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:55.188 10:47:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:55.188 10:47:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:57.090 10:47:50 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:57.090 00:45:57.090 real 1m22.092s 00:45:57.090 user 7m27.790s 00:45:57.090 sys 0m20.369s 00:45:57.090 10:47:50 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:57.090 10:47:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:57.090 ************************************ 00:45:57.090 END TEST nvmf_dif 00:45:57.090 ************************************ 00:45:57.349 10:47:50 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:45:57.349 10:47:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:57.349 10:47:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:57.349 10:47:50 -- common/autotest_common.sh@10 -- # set +x 00:45:57.349 ************************************ 00:45:57.349 START TEST nvmf_abort_qd_sizes 00:45:57.349 ************************************ 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:45:57.349 * Looking for test storage... 
00:45:57.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:45:57.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:57.349 --rc genhtml_branch_coverage=1 00:45:57.349 --rc genhtml_function_coverage=1 00:45:57.349 --rc genhtml_legend=1 00:45:57.349 --rc geninfo_all_blocks=1 00:45:57.349 --rc geninfo_unexecuted_blocks=1 00:45:57.349 00:45:57.349 ' 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:45:57.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:57.349 --rc genhtml_branch_coverage=1 00:45:57.349 --rc genhtml_function_coverage=1 00:45:57.349 --rc genhtml_legend=1 00:45:57.349 --rc geninfo_all_blocks=1 00:45:57.349 --rc geninfo_unexecuted_blocks=1 00:45:57.349 00:45:57.349 ' 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:45:57.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:57.349 --rc genhtml_branch_coverage=1 00:45:57.349 --rc genhtml_function_coverage=1 00:45:57.349 --rc genhtml_legend=1 00:45:57.349 --rc geninfo_all_blocks=1 00:45:57.349 --rc geninfo_unexecuted_blocks=1 00:45:57.349 00:45:57.349 ' 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:45:57.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:57.349 --rc genhtml_branch_coverage=1 00:45:57.349 --rc genhtml_function_coverage=1 00:45:57.349 --rc genhtml_legend=1 00:45:57.349 --rc geninfo_all_blocks=1 00:45:57.349 --rc geninfo_unexecuted_blocks=1 00:45:57.349 00:45:57.349 ' 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:57.349 10:47:51 
nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:57.349 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:45:57.350 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:57.350 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:57.350 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:57.350 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:57.350 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:57.350 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:57.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:57.350 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:57.350 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:57.350 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:57.350 10:47:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:45:57.350 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:57.350 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:57.350 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:57.350 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:57.350 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:57.350 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:45:57.350 10:47:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:57.350 10:47:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:57.350 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:45:57.350 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:45:57.350 10:47:51 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:45:57.350 10:47:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:46:02.612 Found 0000:af:00.0 (0x8086 - 0x159b) 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:46:02.612 Found 0000:af:00.1 (0x8086 - 0x159b) 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:46:02.612 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:46:02.613 Found net devices under 0000:af:00.0: cvl_0_0 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
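The trace above is SPDK's NIC discovery: nvmf/common.sh builds lists of supported PCI IDs (Intel E810/X722, Mellanox ConnectX), filters the host's PCI devices against them, and then resolves each match to its kernel net device through sysfs, printing the "Found net devices under ..." lines. A minimal stand-alone sketch of that idea follows; it is not the SPDK script itself, and the ID list and output format are assumptions chosen to mirror the log.

    #!/usr/bin/env bash
    # Sketch: enumerate Intel E810 ports by PCI vendor/device ID and print the
    # kernel net device exposed under each one (assumed IDs, illustrative only).
    set -euo pipefail
    intel=0x8086
    e810_ids=(0x1592 0x159b)        # the 0x159b hits above are E810-family ports
    for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor")
      device=$(<"$pci/device")
      [[ $vendor == "$intel" ]] || continue
      for id in "${e810_ids[@]}"; do
        [[ $device == "$id" ]] || continue
        for net in "$pci"/net/*; do
          if [[ -e $net ]]; then  # glob stays literal when no driver is bound
            echo "Found net device under ${pci##*/}: ${net##*/}"
          fi
        done
      done
    done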
00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:46:02.613 Found net devices under 0000:af:00.1: cvl_0_1 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:02.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:46:02.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:46:02.613 00:46:02.613 --- 10.0.0.2 ping statistics --- 00:46:02.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:02.613 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:02.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:46:02.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:46:02.613 00:46:02.613 --- 10.0.0.1 ping statistics --- 00:46:02.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:02.613 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:46:02.613 10:47:56 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:05.144 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:46:05.144 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:46:05.144 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:46:05.144 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:46:05.144 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:46:05.144 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:46:05.144 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:46:05.144 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:46:05.144 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:46:05.144 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:46:05.144 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:46:05.144 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:46:05.144 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:46:05.144 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:46:05.144 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:46:05.144 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:46:05.711 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:46:05.711 10:47:59 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:05.711 10:47:59 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:46:05.711 10:47:59 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:46:05.711 10:47:59 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:05.711 10:47:59 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:46:05.711 10:47:59 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:46:05.711 10:47:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:46:05.711 10:47:59 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:05.711 10:47:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:05.711 10:47:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:05.969 10:47:59 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=81832 00:46:05.969 10:47:59 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 81832 00:46:05.969 10:47:59 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:46:05.969 10:47:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 81832 ']' 00:46:05.969 10:47:59 
nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:05.969 10:47:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:05.970 10:47:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:05.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:05.970 10:47:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:05.970 10:47:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:05.970 [2024-12-13 10:47:59.679605] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:46:05.970 [2024-12-13 10:47:59.679691] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:05.970 [2024-12-13 10:47:59.797757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:06.228 [2024-12-13 10:47:59.908362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:06.228 [2024-12-13 10:47:59.908408] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:06.228 [2024-12-13 10:47:59.908422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:06.228 [2024-12-13 10:47:59.908436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:06.228 [2024-12-13 10:47:59.908446] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
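The startup messages above come from nvmfappstart: the nvmf_tgt application is launched inside the cvl_0_0_ns_spdk network namespace with the reactor mask and trace flags shown (-i 0 -e 0xFFFF -m 0xf), and the harness then waits until the JSON-RPC socket answers before issuing RPCs. A simplified sketch of that pattern, assuming it is run as root from an SPDK checkout, is below; the rpc.py polling loop and retry count are stand-ins for SPDK's waitforlisten helper, not its actual implementation.

    # Sketch only: start the target in the namespace, then poll the RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!
    for _ in $(seq 1 100); do
      # rpc_get_methods is a lightweight request; success means the app is ready
      if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        echo "nvmf_tgt (pid $nvmfpid) is ready on /var/tmp/spdk.sock"
        break
      fi
      sleep 0.5
    done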
00:46:06.228 [2024-12-13 10:47:59.910869] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:46:06.228 [2024-12-13 10:47:59.910945] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:46:06.228 [2024-12-13 10:47:59.911009] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:46:06.228 [2024-12-13 10:47:59.911015] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:06.794 10:48:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:06.794 ************************************ 00:46:06.794 START TEST spdk_target_abort 00:46:06.794 ************************************ 00:46:06.794 10:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:46:06.794 10:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:46:06.794 10:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b 
spdk_target 00:46:06.794 10:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:06.794 10:48:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:10.073 spdk_targetn1 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:10.073 [2024-12-13 10:48:03.463852] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:10.073 [2024-12-13 10:48:03.507946] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 
-- # local target r 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:46:10.073 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:10.074 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:46:10.074 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:10.074 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:10.074 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:10.074 10:48:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:13.348 Initializing NVMe Controllers 00:46:13.348 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:13.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:13.348 Initialization complete. Launching workers. 00:46:13.348 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13920, failed: 0 00:46:13.348 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1342, failed to submit 12578 00:46:13.348 success 686, unsuccessful 656, failed 0 00:46:13.348 10:48:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:13.348 10:48:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:16.627 Initializing NVMe Controllers 00:46:16.627 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:16.627 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:16.627 Initialization complete. Launching workers. 
00:46:16.627 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8606, failed: 0 00:46:16.627 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1238, failed to submit 7368 00:46:16.627 success 307, unsuccessful 931, failed 0 00:46:16.627 10:48:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:16.627 10:48:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:19.928 Initializing NVMe Controllers 00:46:19.928 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:19.928 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:19.928 Initialization complete. Launching workers. 00:46:19.928 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33605, failed: 0 00:46:19.928 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2781, failed to submit 30824 00:46:19.928 success 584, unsuccessful 2197, failed 0 00:46:19.928 10:48:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:46:19.928 10:48:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:19.928 10:48:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:19.928 10:48:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:19.928 10:48:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:46:19.928 10:48:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:19.928 10:48:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:21.397 10:48:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:21.397 10:48:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 81832 00:46:21.397 10:48:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 81832 ']' 00:46:21.397 10:48:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 81832 00:46:21.397 10:48:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:46:21.397 10:48:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:21.397 10:48:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81832 00:46:21.397 10:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:21.397 10:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:21.397 10:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81832' 00:46:21.397 killing process with pid 81832 00:46:21.397 10:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 81832 00:46:21.397 10:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 
81832 00:46:22.329 00:46:22.329 real 0m15.339s 00:46:22.329 user 1m0.091s 00:46:22.329 sys 0m2.716s 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:22.329 ************************************ 00:46:22.329 END TEST spdk_target_abort 00:46:22.329 ************************************ 00:46:22.329 10:48:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:46:22.329 10:48:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:22.329 10:48:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:22.329 10:48:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:22.329 ************************************ 00:46:22.329 START TEST kernel_target_abort 00:46:22.329 ************************************ 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:46:22.329 10:48:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:46:22.329 10:48:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:46:22.329 10:48:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:24.861 Waiting for block devices as requested 00:46:24.861 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:46:24.861 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:46:24.861 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:46:25.119 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:46:25.119 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:46:25.119 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:46:25.119 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:46:25.378 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:25.378 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:46:25.378 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:46:25.378 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:46:25.636 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:46:25.636 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:46:25.636 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:46:25.894 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:46:25.894 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:25.894 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:46:26.458 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:46:26.458 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:46:26.458 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:46:26.458 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:46:26.458 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:46:26.458 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:46:26.458 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:46:26.458 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:46:26.458 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:46:26.458 No valid GPT data, bailing 00:46:26.458 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:26.716 10:48:20 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:46:26.716 00:46:26.716 Discovery Log Number of Records 2, Generation counter 2 00:46:26.716 =====Discovery Log Entry 0====== 00:46:26.716 trtype: tcp 00:46:26.716 adrfam: ipv4 00:46:26.716 subtype: current discovery subsystem 00:46:26.716 treq: not specified, sq flow control disable supported 00:46:26.716 portid: 1 00:46:26.716 trsvcid: 4420 00:46:26.716 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:46:26.716 traddr: 10.0.0.1 00:46:26.716 eflags: none 00:46:26.716 sectype: none 00:46:26.716 =====Discovery Log Entry 1====== 00:46:26.716 trtype: tcp 00:46:26.716 adrfam: ipv4 00:46:26.716 subtype: nvme subsystem 00:46:26.716 treq: not specified, sq flow control disable supported 00:46:26.716 portid: 1 00:46:26.716 trsvcid: 4420 00:46:26.716 subnqn: nqn.2016-06.io.spdk:testnqn 00:46:26.716 traddr: 10.0.0.1 00:46:26.716 eflags: none 00:46:26.716 sectype: none 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:26.716 10:48:20 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:26.716 10:48:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:30.000 Initializing NVMe Controllers 00:46:30.000 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:30.000 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:30.000 Initialization complete. Launching workers. 00:46:30.000 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79157, failed: 0 00:46:30.000 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 79157, failed to submit 0 00:46:30.000 success 0, unsuccessful 79157, failed 0 00:46:30.000 10:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:30.000 10:48:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:33.282 Initializing NVMe Controllers 00:46:33.282 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:33.282 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:33.282 Initialization complete. Launching workers. 
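Before these abort runs, the kernel_target_abort trace above assembles an in-kernel NVMe/TCP target through configfs. Because xtrace does not show redirection targets, the attribute file names in the sketch below are assumptions inferred from the standard kernel nvmet configfs layout and from nvmf/common.sh; it is a condensed reading of the trace, not a verbatim replay.

# Condensed sketch of the setup traced above (nvmf/common.sh@670-705).
# File names marked "assumed" are not visible in the xtrace output.
modprobe nvmet                      # nvmet_tcp must also be present for addr_trtype=tcp
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"        # assumed target file
echo 1 > "$subsys/attr_allow_any_host"                               # assumed target file
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"               # assumed target file
echo 1 > "$subsys/namespaces/1/enable"                               # assumed target file
echo 10.0.0.1 > "$port/addr_traddr"                                  # assumed target file
echo tcp > "$port/addr_trtype"                                       # assumed target file
echo 4420 > "$port/addr_trsvcid"                                     # assumed target file
echo ipv4 > "$port/addr_adrfam"                                      # assumed target file
ln -s "$subsys" "$port/subsystems/"

The clean_kernel_target teardown traced after the three abort runs reverses these steps: it removes the port-to-subsystem symlink, rmdirs the namespace, port and subsystem directories, and finally runs modprobe -r nvmet_tcp nvmet.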
00:46:33.282 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 126276, failed: 0 00:46:33.282 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31694, failed to submit 94582 00:46:33.282 success 0, unsuccessful 31694, failed 0 00:46:33.282 10:48:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:33.282 10:48:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:36.565 Initializing NVMe Controllers 00:46:36.565 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:36.565 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:36.565 Initialization complete. Launching workers. 00:46:36.565 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 120119, failed: 0 00:46:36.565 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30062, failed to submit 90057 00:46:36.565 success 0, unsuccessful 30062, failed 0 00:46:36.565 10:48:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:46:36.565 10:48:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:46:36.565 10:48:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:46:36.565 10:48:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:36.565 10:48:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:36.565 10:48:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:46:36.565 10:48:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:36.565 10:48:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:46:36.565 10:48:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:46:36.565 10:48:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:39.098 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:46:39.098 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:46:39.098 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:46:39.098 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:46:39.098 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:46:39.098 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:46:39.098 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:46:39.098 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:46:39.098 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:46:39.098 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:46:39.098 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:46:39.098 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:46:39.098 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:46:39.098 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:46:39.098 0000:80:04.1 (8086 2021): ioatdma 
-> vfio-pci 00:46:39.098 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:46:39.665 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:46:39.924 00:46:39.924 real 0m17.622s 00:46:39.924 user 0m9.150s 00:46:39.924 sys 0m5.279s 00:46:39.924 10:48:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:39.924 10:48:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:39.924 ************************************ 00:46:39.924 END TEST kernel_target_abort 00:46:39.924 ************************************ 00:46:39.924 10:48:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:46:39.924 10:48:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:46:39.924 10:48:33 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:39.924 10:48:33 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:46:39.924 10:48:33 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:39.924 10:48:33 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:46:39.924 10:48:33 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:39.924 10:48:33 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:39.924 rmmod nvme_tcp 00:46:39.924 rmmod nvme_fabrics 00:46:39.924 rmmod nvme_keyring 00:46:39.924 10:48:33 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:39.924 10:48:33 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:46:39.924 10:48:33 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:46:39.924 10:48:33 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 81832 ']' 00:46:39.924 10:48:33 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 81832 00:46:39.924 10:48:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 81832 ']' 00:46:39.924 10:48:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 81832 00:46:39.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (81832) - No such process 00:46:39.924 10:48:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 81832 is not found' 00:46:39.924 Process with pid 81832 is not found 00:46:39.924 10:48:33 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:46:39.924 10:48:33 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:42.453 Waiting for block devices as requested 00:46:42.453 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:46:42.712 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:46:42.712 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:46:42.712 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:46:42.712 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:46:42.972 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:46:42.972 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:46:42.972 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:42.972 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:46:43.231 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:46:43.231 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:46:43.231 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:46:43.490 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:46:43.490 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:46:43.490 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:46:43.490 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:43.748 0000:80:04.0 (8086 
2021): vfio-pci -> ioatdma 00:46:43.748 10:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:43.748 10:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:43.748 10:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:46:43.748 10:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:46:43.748 10:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:43.748 10:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:46:43.748 10:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:43.748 10:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:43.748 10:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:43.748 10:48:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:43.748 10:48:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:46.281 10:48:39 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:46.281 00:46:46.281 real 0m48.540s 00:46:46.281 user 1m13.095s 00:46:46.281 sys 0m15.717s 00:46:46.281 10:48:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:46.281 10:48:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:46.281 ************************************ 00:46:46.281 END TEST nvmf_abort_qd_sizes 00:46:46.281 ************************************ 00:46:46.281 10:48:39 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:46:46.281 10:48:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:46.281 10:48:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:46.281 10:48:39 -- common/autotest_common.sh@10 -- # set +x 00:46:46.281 ************************************ 00:46:46.281 START TEST keyring_file 00:46:46.281 ************************************ 00:46:46.281 10:48:39 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:46:46.281 * Looking for test storage... 
00:46:46.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:46:46.281 10:48:39 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:46.281 10:48:39 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:46:46.281 10:48:39 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:46.281 10:48:39 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:46.281 10:48:39 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:46.281 10:48:39 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:46.281 10:48:39 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@345 -- # : 1 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@353 -- # local d=1 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@355 -- # echo 1 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@353 -- # local d=2 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@355 -- # echo 2 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@368 -- # return 0 00:46:46.282 10:48:39 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:46.282 10:48:39 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:46.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:46.282 --rc genhtml_branch_coverage=1 00:46:46.282 --rc genhtml_function_coverage=1 00:46:46.282 --rc genhtml_legend=1 00:46:46.282 --rc geninfo_all_blocks=1 00:46:46.282 --rc geninfo_unexecuted_blocks=1 00:46:46.282 00:46:46.282 ' 00:46:46.282 10:48:39 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:46.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:46.282 --rc genhtml_branch_coverage=1 00:46:46.282 --rc genhtml_function_coverage=1 00:46:46.282 --rc genhtml_legend=1 00:46:46.282 --rc geninfo_all_blocks=1 
00:46:46.282 --rc geninfo_unexecuted_blocks=1 00:46:46.282 00:46:46.282 ' 00:46:46.282 10:48:39 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:46.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:46.282 --rc genhtml_branch_coverage=1 00:46:46.282 --rc genhtml_function_coverage=1 00:46:46.282 --rc genhtml_legend=1 00:46:46.282 --rc geninfo_all_blocks=1 00:46:46.282 --rc geninfo_unexecuted_blocks=1 00:46:46.282 00:46:46.282 ' 00:46:46.282 10:48:39 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:46.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:46.282 --rc genhtml_branch_coverage=1 00:46:46.282 --rc genhtml_function_coverage=1 00:46:46.282 --rc genhtml_legend=1 00:46:46.282 --rc geninfo_all_blocks=1 00:46:46.282 --rc geninfo_unexecuted_blocks=1 00:46:46.282 00:46:46.282 ' 00:46:46.282 10:48:39 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:46:46.282 10:48:39 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:46.282 10:48:39 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:46.282 10:48:39 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:46.282 10:48:39 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:46.282 10:48:39 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:46.282 10:48:39 keyring_file -- paths/export.sh@5 -- # export PATH 00:46:46.282 10:48:39 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@51 -- # : 0 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:46.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:46.282 10:48:39 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:46:46.282 10:48:39 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:46:46.282 10:48:39 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:46:46.282 10:48:39 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:46:46.282 10:48:39 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:46:46.282 10:48:39 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:46:46.282 10:48:39 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:46.282 10:48:39 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:46:46.282 10:48:39 keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:46.282 10:48:39 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:46.282 10:48:39 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:46.282 10:48:39 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:46.282 10:48:39 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9Ck65pkG3o 00:46:46.282 10:48:39 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:46:46.282 10:48:39 keyring_file -- nvmf/common.sh@733 -- # python - 00:46:46.282 10:48:39 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9Ck65pkG3o 00:46:46.282 10:48:39 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9Ck65pkG3o 00:46:46.282 10:48:39 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.9Ck65pkG3o 00:46:46.282 10:48:39 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:46:46.282 10:48:39 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:46:46.282 10:48:39 keyring_file -- keyring/common.sh@17 -- # name=key1 00:46:46.282 10:48:39 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:46:46.283 10:48:39 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:46.283 10:48:39 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:46.283 10:48:39 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dkE91TGToA 00:46:46.283 10:48:39 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:46:46.283 10:48:39 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:46:46.283 10:48:39 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:46:46.283 10:48:39 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:46.283 10:48:39 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:46:46.283 10:48:39 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:46:46.283 10:48:39 keyring_file -- nvmf/common.sh@733 -- # python - 00:46:46.283 10:48:39 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dkE91TGToA 00:46:46.283 10:48:39 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dkE91TGToA 00:46:46.283 10:48:39 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.dkE91TGToA 00:46:46.283 10:48:39 keyring_file -- keyring/file.sh@30 -- # tgtpid=91383 00:46:46.283 10:48:39 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:46:46.283 10:48:39 keyring_file -- keyring/file.sh@32 -- # waitforlisten 91383 00:46:46.283 10:48:39 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 91383 ']' 00:46:46.283 10:48:39 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:46.283 10:48:39 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:46.283 10:48:39 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:46.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:46.283 10:48:39 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:46.283 10:48:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:46.283 [2024-12-13 10:48:40.018227] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:46:46.283 [2024-12-13 10:48:40.018318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91383 ] 00:46:46.283 [2024-12-13 10:48:40.134950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:46.541 [2024-12-13 10:48:40.242705] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:46:47.477 10:48:41 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:47.477 [2024-12-13 10:48:41.072300] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:47.477 null0 00:46:47.477 [2024-12-13 10:48:41.104343] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:47.477 [2024-12-13 10:48:41.104694] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:47.477 10:48:41 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:47.477 [2024-12-13 10:48:41.132402] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:46:47.477 request: 00:46:47.477 { 00:46:47.477 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:46:47.477 "secure_channel": false, 00:46:47.477 "listen_address": { 00:46:47.477 "trtype": "tcp", 00:46:47.477 "traddr": "127.0.0.1", 00:46:47.477 "trsvcid": "4420" 00:46:47.477 }, 00:46:47.477 "method": "nvmf_subsystem_add_listener", 00:46:47.477 "req_id": 1 00:46:47.477 } 00:46:47.477 Got JSON-RPC error response 00:46:47.477 response: 00:46:47.477 { 00:46:47.477 "code": -32602, 
00:46:47.477 "message": "Invalid parameters" 00:46:47.477 } 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:47.477 10:48:41 keyring_file -- keyring/file.sh@47 -- # bperfpid=91595 00:46:47.477 10:48:41 keyring_file -- keyring/file.sh@49 -- # waitforlisten 91595 /var/tmp/bperf.sock 00:46:47.477 10:48:41 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 91595 ']' 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:47.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:47.477 10:48:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:47.477 [2024-12-13 10:48:41.210444] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:46:47.477 [2024-12-13 10:48:41.210553] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91595 ] 00:46:47.477 [2024-12-13 10:48:41.322871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:47.735 [2024-12-13 10:48:41.431656] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:46:48.302 10:48:42 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:48.302 10:48:42 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:46:48.302 10:48:42 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9Ck65pkG3o 00:46:48.302 10:48:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9Ck65pkG3o 00:46:48.560 10:48:42 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.dkE91TGToA 00:46:48.560 10:48:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.dkE91TGToA 00:46:48.560 10:48:42 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:46:48.560 10:48:42 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:46:48.560 10:48:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:48.560 10:48:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:48.560 10:48:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:48.819 10:48:42 
keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.9Ck65pkG3o == \/\t\m\p\/\t\m\p\.\9\C\k\6\5\p\k\G\3\o ]] 00:46:48.819 10:48:42 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:46:48.819 10:48:42 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:46:48.819 10:48:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:48.819 10:48:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:48.819 10:48:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:49.077 10:48:42 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.dkE91TGToA == \/\t\m\p\/\t\m\p\.\d\k\E\9\1\T\G\T\o\A ]] 00:46:49.077 10:48:42 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:46:49.077 10:48:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:49.077 10:48:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:49.077 10:48:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:49.077 10:48:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:49.077 10:48:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:49.335 10:48:42 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:46:49.335 10:48:42 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:46:49.335 10:48:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:49.335 10:48:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:49.335 10:48:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:49.335 10:48:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:49.335 10:48:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:49.335 10:48:43 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:46:49.335 10:48:43 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:49.335 10:48:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:49.594 [2024-12-13 10:48:43.359822] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:49.594 nvme0n1 00:46:49.594 10:48:43 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:46:49.594 10:48:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:49.594 10:48:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:49.594 10:48:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:49.594 10:48:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:49.594 10:48:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:49.852 10:48:43 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:46:49.852 10:48:43 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:46:49.852 10:48:43 keyring_file -- keyring/common.sh@12 
-- # get_key key1 00:46:49.852 10:48:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:49.852 10:48:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:49.852 10:48:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:49.852 10:48:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:50.110 10:48:43 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:46:50.110 10:48:43 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:50.110 Running I/O for 1 seconds... 00:46:51.044 14965.00 IOPS, 58.46 MiB/s 00:46:51.044 Latency(us) 00:46:51.044 [2024-12-13T09:48:44.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:51.044 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:46:51.044 nvme0n1 : 1.01 15017.41 58.66 0.00 0.00 8505.42 3588.88 18849.40 00:46:51.044 [2024-12-13T09:48:44.935Z] =================================================================================================================== 00:46:51.044 [2024-12-13T09:48:44.935Z] Total : 15017.41 58.66 0.00 0.00 8505.42 3588.88 18849.40 00:46:51.044 { 00:46:51.044 "results": [ 00:46:51.044 { 00:46:51.044 "job": "nvme0n1", 00:46:51.044 "core_mask": "0x2", 00:46:51.044 "workload": "randrw", 00:46:51.044 "percentage": 50, 00:46:51.044 "status": "finished", 00:46:51.044 "queue_depth": 128, 00:46:51.044 "io_size": 4096, 00:46:51.044 "runtime": 1.0051, 00:46:51.044 "iops": 15017.411202865387, 00:46:51.044 "mibps": 58.66176251119292, 00:46:51.044 "io_failed": 0, 00:46:51.044 "io_timeout": 0, 00:46:51.044 "avg_latency_us": 8505.42475079975, 00:46:51.044 "min_latency_us": 3588.8761904761905, 00:46:51.044 "max_latency_us": 18849.401904761904 00:46:51.044 } 00:46:51.044 ], 00:46:51.044 "core_count": 1 00:46:51.044 } 00:46:51.044 10:48:44 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:51.044 10:48:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:51.303 10:48:45 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:46:51.303 10:48:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:51.303 10:48:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:51.303 10:48:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:51.303 10:48:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:51.303 10:48:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:51.560 10:48:45 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:46:51.560 10:48:45 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:46:51.560 10:48:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:51.560 10:48:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:51.560 10:48:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:51.560 10:48:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:51.560 10:48:45 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:51.818 10:48:45 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:46:51.818 10:48:45 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:51.818 10:48:45 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:46:51.818 10:48:45 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:51.818 10:48:45 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:46:51.818 10:48:45 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:51.818 10:48:45 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:46:51.818 10:48:45 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:51.818 10:48:45 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:51.818 10:48:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:52.077 [2024-12-13 10:48:45.712958] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:52.077 [2024-12-13 10:48:45.713277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032ad00 (107): Transport endpoint is not connected 00:46:52.077 [2024-12-13 10:48:45.714261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032ad00 (9): Bad file descriptor 00:46:52.077 [2024-12-13 10:48:45.715257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:46:52.077 [2024-12-13 10:48:45.715276] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:46:52.077 [2024-12-13 10:48:45.715288] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:46:52.077 [2024-12-13 10:48:45.715300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:46:52.077 request: 00:46:52.077 { 00:46:52.077 "name": "nvme0", 00:46:52.077 "trtype": "tcp", 00:46:52.078 "traddr": "127.0.0.1", 00:46:52.078 "adrfam": "ipv4", 00:46:52.078 "trsvcid": "4420", 00:46:52.078 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:52.078 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:52.078 "prchk_reftag": false, 00:46:52.078 "prchk_guard": false, 00:46:52.078 "hdgst": false, 00:46:52.078 "ddgst": false, 00:46:52.078 "psk": "key1", 00:46:52.078 "allow_unrecognized_csi": false, 00:46:52.078 "method": "bdev_nvme_attach_controller", 00:46:52.078 "req_id": 1 00:46:52.078 } 00:46:52.078 Got JSON-RPC error response 00:46:52.078 response: 00:46:52.078 { 00:46:52.078 "code": -5, 00:46:52.078 "message": "Input/output error" 00:46:52.078 } 00:46:52.078 10:48:45 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:52.078 10:48:45 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:52.078 10:48:45 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:52.078 10:48:45 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:52.078 10:48:45 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:46:52.078 10:48:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:52.078 10:48:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:52.078 10:48:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:52.078 10:48:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:52.078 10:48:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:52.078 10:48:45 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:46:52.078 10:48:45 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:46:52.078 10:48:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:52.078 10:48:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:52.078 10:48:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:52.078 10:48:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:52.078 10:48:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:52.337 10:48:46 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:46:52.337 10:48:46 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:46:52.337 10:48:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:52.595 10:48:46 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:46:52.595 10:48:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:46:52.595 10:48:46 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:46:52.595 10:48:46 keyring_file -- keyring/file.sh@78 -- # jq length 00:46:52.595 10:48:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:52.853 10:48:46 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:46:52.853 10:48:46 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.9Ck65pkG3o 00:46:52.853 10:48:46 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.9Ck65pkG3o 00:46:52.853 10:48:46 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:46:52.853 10:48:46 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.9Ck65pkG3o 00:46:52.853 10:48:46 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:46:52.853 10:48:46 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:52.853 10:48:46 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:46:52.853 10:48:46 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:52.853 10:48:46 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9Ck65pkG3o 00:46:52.853 10:48:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9Ck65pkG3o 00:46:53.111 [2024-12-13 10:48:46.859770] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.9Ck65pkG3o': 0100660 00:46:53.111 [2024-12-13 10:48:46.859807] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:46:53.111 request: 00:46:53.111 { 00:46:53.111 "name": "key0", 00:46:53.111 "path": "/tmp/tmp.9Ck65pkG3o", 00:46:53.111 "method": "keyring_file_add_key", 00:46:53.111 "req_id": 1 00:46:53.111 } 00:46:53.111 Got JSON-RPC error response 00:46:53.111 response: 00:46:53.111 { 00:46:53.111 "code": -1, 00:46:53.111 "message": "Operation not permitted" 00:46:53.111 } 00:46:53.111 10:48:46 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:53.111 10:48:46 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:53.111 10:48:46 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:53.111 10:48:46 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:53.111 10:48:46 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.9Ck65pkG3o 00:46:53.111 10:48:46 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9Ck65pkG3o 00:46:53.111 10:48:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9Ck65pkG3o 00:46:53.370 10:48:47 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.9Ck65pkG3o 00:46:53.370 10:48:47 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:46:53.370 10:48:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:53.370 10:48:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:53.370 10:48:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:53.370 10:48:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:53.370 10:48:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:53.628 10:48:47 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:46:53.628 10:48:47 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:53.628 10:48:47 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:46:53.628 10:48:47 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:53.628 10:48:47 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:46:53.628 10:48:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:53.628 10:48:47 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:46:53.628 10:48:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:53.628 10:48:47 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:53.628 10:48:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:53.628 [2024-12-13 10:48:47.437325] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.9Ck65pkG3o': No such file or directory 00:46:53.628 [2024-12-13 10:48:47.437358] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:46:53.628 [2024-12-13 10:48:47.437378] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:46:53.628 [2024-12-13 10:48:47.437406] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:46:53.628 [2024-12-13 10:48:47.437416] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:46:53.628 [2024-12-13 10:48:47.437426] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:46:53.628 request: 00:46:53.628 { 00:46:53.628 "name": "nvme0", 00:46:53.628 "trtype": "tcp", 00:46:53.628 "traddr": "127.0.0.1", 00:46:53.628 "adrfam": "ipv4", 00:46:53.628 "trsvcid": "4420", 00:46:53.628 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:53.628 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:53.628 "prchk_reftag": false, 00:46:53.628 "prchk_guard": false, 00:46:53.628 "hdgst": false, 00:46:53.628 "ddgst": false, 00:46:53.628 "psk": "key0", 00:46:53.628 "allow_unrecognized_csi": false, 00:46:53.628 "method": "bdev_nvme_attach_controller", 00:46:53.628 "req_id": 1 00:46:53.628 } 00:46:53.628 Got JSON-RPC error response 00:46:53.628 response: 00:46:53.628 { 00:46:53.628 "code": -19, 00:46:53.628 "message": "No such device" 00:46:53.628 } 00:46:53.628 10:48:47 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:53.628 10:48:47 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:53.628 10:48:47 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:53.628 10:48:47 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:53.628 10:48:47 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:46:53.628 10:48:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:53.886 10:48:47 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:53.886 10:48:47 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:46:53.886 10:48:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:53.886 10:48:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:53.886 10:48:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:53.886 10:48:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:53.886 10:48:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YPJZ8KK7Pq 00:46:53.886 10:48:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:53.886 10:48:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:53.886 10:48:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:46:53.886 10:48:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:53.886 10:48:47 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:46:53.886 10:48:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:46:53.886 10:48:47 keyring_file -- nvmf/common.sh@733 -- # python - 00:46:53.886 10:48:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YPJZ8KK7Pq 00:46:53.886 10:48:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YPJZ8KK7Pq 00:46:53.886 10:48:47 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.YPJZ8KK7Pq 00:46:53.886 10:48:47 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YPJZ8KK7Pq 00:46:53.886 10:48:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YPJZ8KK7Pq 00:46:54.144 10:48:47 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:54.144 10:48:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:54.401 nvme0n1 00:46:54.402 10:48:48 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:46:54.402 10:48:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:54.402 10:48:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:54.402 10:48:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:54.402 10:48:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:54.402 10:48:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:54.659 10:48:48 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:46:54.659 10:48:48 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:46:54.659 10:48:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:54.659 10:48:48 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:46:54.659 10:48:48 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:46:54.659 10:48:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:54.659 10:48:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:54.659 10:48:48 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:54.917 10:48:48 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:46:54.917 10:48:48 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:46:54.917 10:48:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:54.917 10:48:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:54.917 10:48:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:54.917 10:48:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:54.917 10:48:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:55.175 10:48:48 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:46:55.175 10:48:48 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:55.175 10:48:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:55.432 10:48:49 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:46:55.432 10:48:49 keyring_file -- keyring/file.sh@105 -- # jq length 00:46:55.432 10:48:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:55.432 10:48:49 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:46:55.432 10:48:49 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YPJZ8KK7Pq 00:46:55.432 10:48:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YPJZ8KK7Pq 00:46:55.689 10:48:49 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.dkE91TGToA 00:46:55.689 10:48:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.dkE91TGToA 00:46:55.947 10:48:49 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:55.947 10:48:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:56.204 nvme0n1 00:46:56.204 10:48:49 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:46:56.204 10:48:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:46:56.462 10:48:50 keyring_file -- keyring/file.sh@113 -- # config='{ 00:46:56.462 "subsystems": [ 00:46:56.462 { 00:46:56.462 "subsystem": "keyring", 00:46:56.462 "config": [ 00:46:56.462 { 00:46:56.462 "method": "keyring_file_add_key", 00:46:56.462 "params": { 00:46:56.462 "name": "key0", 00:46:56.462 "path": "/tmp/tmp.YPJZ8KK7Pq" 00:46:56.462 } 00:46:56.462 }, 00:46:56.462 { 00:46:56.462 "method": "keyring_file_add_key", 00:46:56.462 "params": { 00:46:56.462 "name": "key1", 00:46:56.462 "path": "/tmp/tmp.dkE91TGToA" 00:46:56.462 } 00:46:56.462 } 00:46:56.462 ] 00:46:56.462 
}, 00:46:56.462 { 00:46:56.462 "subsystem": "iobuf", 00:46:56.462 "config": [ 00:46:56.462 { 00:46:56.462 "method": "iobuf_set_options", 00:46:56.462 "params": { 00:46:56.462 "small_pool_count": 8192, 00:46:56.462 "large_pool_count": 1024, 00:46:56.462 "small_bufsize": 8192, 00:46:56.462 "large_bufsize": 135168, 00:46:56.462 "enable_numa": false 00:46:56.462 } 00:46:56.462 } 00:46:56.462 ] 00:46:56.462 }, 00:46:56.462 { 00:46:56.462 "subsystem": "sock", 00:46:56.462 "config": [ 00:46:56.462 { 00:46:56.462 "method": "sock_set_default_impl", 00:46:56.462 "params": { 00:46:56.462 "impl_name": "posix" 00:46:56.462 } 00:46:56.462 }, 00:46:56.462 { 00:46:56.462 "method": "sock_impl_set_options", 00:46:56.462 "params": { 00:46:56.462 "impl_name": "ssl", 00:46:56.462 "recv_buf_size": 4096, 00:46:56.462 "send_buf_size": 4096, 00:46:56.462 "enable_recv_pipe": true, 00:46:56.462 "enable_quickack": false, 00:46:56.462 "enable_placement_id": 0, 00:46:56.462 "enable_zerocopy_send_server": true, 00:46:56.462 "enable_zerocopy_send_client": false, 00:46:56.462 "zerocopy_threshold": 0, 00:46:56.462 "tls_version": 0, 00:46:56.462 "enable_ktls": false 00:46:56.462 } 00:46:56.462 }, 00:46:56.462 { 00:46:56.462 "method": "sock_impl_set_options", 00:46:56.462 "params": { 00:46:56.462 "impl_name": "posix", 00:46:56.462 "recv_buf_size": 2097152, 00:46:56.462 "send_buf_size": 2097152, 00:46:56.462 "enable_recv_pipe": true, 00:46:56.462 "enable_quickack": false, 00:46:56.462 "enable_placement_id": 0, 00:46:56.462 "enable_zerocopy_send_server": true, 00:46:56.462 "enable_zerocopy_send_client": false, 00:46:56.462 "zerocopy_threshold": 0, 00:46:56.462 "tls_version": 0, 00:46:56.462 "enable_ktls": false 00:46:56.462 } 00:46:56.462 } 00:46:56.462 ] 00:46:56.462 }, 00:46:56.462 { 00:46:56.462 "subsystem": "vmd", 00:46:56.462 "config": [] 00:46:56.462 }, 00:46:56.462 { 00:46:56.462 "subsystem": "accel", 00:46:56.462 "config": [ 00:46:56.462 { 00:46:56.462 "method": "accel_set_options", 00:46:56.462 "params": { 00:46:56.462 "small_cache_size": 128, 00:46:56.462 "large_cache_size": 16, 00:46:56.462 "task_count": 2048, 00:46:56.462 "sequence_count": 2048, 00:46:56.462 "buf_count": 2048 00:46:56.462 } 00:46:56.462 } 00:46:56.462 ] 00:46:56.462 }, 00:46:56.462 { 00:46:56.462 "subsystem": "bdev", 00:46:56.462 "config": [ 00:46:56.462 { 00:46:56.462 "method": "bdev_set_options", 00:46:56.462 "params": { 00:46:56.462 "bdev_io_pool_size": 65535, 00:46:56.462 "bdev_io_cache_size": 256, 00:46:56.462 "bdev_auto_examine": true, 00:46:56.462 "iobuf_small_cache_size": 128, 00:46:56.462 "iobuf_large_cache_size": 16 00:46:56.462 } 00:46:56.462 }, 00:46:56.462 { 00:46:56.462 "method": "bdev_raid_set_options", 00:46:56.462 "params": { 00:46:56.462 "process_window_size_kb": 1024, 00:46:56.462 "process_max_bandwidth_mb_sec": 0 00:46:56.462 } 00:46:56.462 }, 00:46:56.462 { 00:46:56.462 "method": "bdev_iscsi_set_options", 00:46:56.462 "params": { 00:46:56.462 "timeout_sec": 30 00:46:56.462 } 00:46:56.462 }, 00:46:56.462 { 00:46:56.462 "method": "bdev_nvme_set_options", 00:46:56.462 "params": { 00:46:56.462 "action_on_timeout": "none", 00:46:56.462 "timeout_us": 0, 00:46:56.462 "timeout_admin_us": 0, 00:46:56.462 "keep_alive_timeout_ms": 10000, 00:46:56.462 "arbitration_burst": 0, 00:46:56.462 "low_priority_weight": 0, 00:46:56.462 "medium_priority_weight": 0, 00:46:56.462 "high_priority_weight": 0, 00:46:56.462 "nvme_adminq_poll_period_us": 10000, 00:46:56.462 "nvme_ioq_poll_period_us": 0, 00:46:56.462 "io_queue_requests": 512, 00:46:56.462 
"delay_cmd_submit": true, 00:46:56.462 "transport_retry_count": 4, 00:46:56.462 "bdev_retry_count": 3, 00:46:56.462 "transport_ack_timeout": 0, 00:46:56.462 "ctrlr_loss_timeout_sec": 0, 00:46:56.462 "reconnect_delay_sec": 0, 00:46:56.462 "fast_io_fail_timeout_sec": 0, 00:46:56.462 "disable_auto_failback": false, 00:46:56.462 "generate_uuids": false, 00:46:56.462 "transport_tos": 0, 00:46:56.462 "nvme_error_stat": false, 00:46:56.462 "rdma_srq_size": 0, 00:46:56.462 "io_path_stat": false, 00:46:56.462 "allow_accel_sequence": false, 00:46:56.462 "rdma_max_cq_size": 0, 00:46:56.462 "rdma_cm_event_timeout_ms": 0, 00:46:56.462 "dhchap_digests": [ 00:46:56.462 "sha256", 00:46:56.462 "sha384", 00:46:56.462 "sha512" 00:46:56.462 ], 00:46:56.462 "dhchap_dhgroups": [ 00:46:56.462 "null", 00:46:56.462 "ffdhe2048", 00:46:56.462 "ffdhe3072", 00:46:56.462 "ffdhe4096", 00:46:56.462 "ffdhe6144", 00:46:56.462 "ffdhe8192" 00:46:56.462 ], 00:46:56.462 "rdma_umr_per_io": false 00:46:56.462 } 00:46:56.462 }, 00:46:56.462 { 00:46:56.462 "method": "bdev_nvme_attach_controller", 00:46:56.462 "params": { 00:46:56.462 "name": "nvme0", 00:46:56.463 "trtype": "TCP", 00:46:56.463 "adrfam": "IPv4", 00:46:56.463 "traddr": "127.0.0.1", 00:46:56.463 "trsvcid": "4420", 00:46:56.463 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:56.463 "prchk_reftag": false, 00:46:56.463 "prchk_guard": false, 00:46:56.463 "ctrlr_loss_timeout_sec": 0, 00:46:56.463 "reconnect_delay_sec": 0, 00:46:56.463 "fast_io_fail_timeout_sec": 0, 00:46:56.463 "psk": "key0", 00:46:56.463 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:56.463 "hdgst": false, 00:46:56.463 "ddgst": false, 00:46:56.463 "multipath": "multipath" 00:46:56.463 } 00:46:56.463 }, 00:46:56.463 { 00:46:56.463 "method": "bdev_nvme_set_hotplug", 00:46:56.463 "params": { 00:46:56.463 "period_us": 100000, 00:46:56.463 "enable": false 00:46:56.463 } 00:46:56.463 }, 00:46:56.463 { 00:46:56.463 "method": "bdev_wait_for_examine" 00:46:56.463 } 00:46:56.463 ] 00:46:56.463 }, 00:46:56.463 { 00:46:56.463 "subsystem": "nbd", 00:46:56.463 "config": [] 00:46:56.463 } 00:46:56.463 ] 00:46:56.463 }' 00:46:56.463 10:48:50 keyring_file -- keyring/file.sh@115 -- # killprocess 91595 00:46:56.463 10:48:50 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 91595 ']' 00:46:56.463 10:48:50 keyring_file -- common/autotest_common.sh@958 -- # kill -0 91595 00:46:56.463 10:48:50 keyring_file -- common/autotest_common.sh@959 -- # uname 00:46:56.463 10:48:50 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:56.463 10:48:50 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91595 00:46:56.463 10:48:50 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:56.463 10:48:50 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:56.463 10:48:50 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91595' 00:46:56.463 killing process with pid 91595 00:46:56.463 10:48:50 keyring_file -- common/autotest_common.sh@973 -- # kill 91595 00:46:56.463 Received shutdown signal, test time was about 1.000000 seconds 00:46:56.463 00:46:56.463 Latency(us) 00:46:56.463 [2024-12-13T09:48:50.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:56.463 [2024-12-13T09:48:50.354Z] =================================================================================================================== 00:46:56.463 [2024-12-13T09:48:50.354Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 00:46:56.463 10:48:50 keyring_file -- common/autotest_common.sh@978 -- # wait 91595 00:46:57.397 10:48:51 keyring_file -- keyring/file.sh@118 -- # bperfpid=93179 00:46:57.397 10:48:51 keyring_file -- keyring/file.sh@120 -- # waitforlisten 93179 /var/tmp/bperf.sock 00:46:57.397 10:48:51 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 93179 ']' 00:46:57.397 10:48:51 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:57.397 10:48:51 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:46:57.397 10:48:51 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:57.397 10:48:51 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:57.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:57.397 10:48:51 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:46:57.397 "subsystems": [ 00:46:57.397 { 00:46:57.397 "subsystem": "keyring", 00:46:57.397 "config": [ 00:46:57.397 { 00:46:57.397 "method": "keyring_file_add_key", 00:46:57.397 "params": { 00:46:57.397 "name": "key0", 00:46:57.397 "path": "/tmp/tmp.YPJZ8KK7Pq" 00:46:57.397 } 00:46:57.397 }, 00:46:57.397 { 00:46:57.397 "method": "keyring_file_add_key", 00:46:57.397 "params": { 00:46:57.397 "name": "key1", 00:46:57.397 "path": "/tmp/tmp.dkE91TGToA" 00:46:57.397 } 00:46:57.397 } 00:46:57.397 ] 00:46:57.397 }, 00:46:57.397 { 00:46:57.397 "subsystem": "iobuf", 00:46:57.397 "config": [ 00:46:57.397 { 00:46:57.397 "method": "iobuf_set_options", 00:46:57.397 "params": { 00:46:57.397 "small_pool_count": 8192, 00:46:57.397 "large_pool_count": 1024, 00:46:57.397 "small_bufsize": 8192, 00:46:57.397 "large_bufsize": 135168, 00:46:57.397 "enable_numa": false 00:46:57.397 } 00:46:57.397 } 00:46:57.397 ] 00:46:57.397 }, 00:46:57.397 { 00:46:57.397 "subsystem": "sock", 00:46:57.397 "config": [ 00:46:57.397 { 00:46:57.397 "method": "sock_set_default_impl", 00:46:57.397 "params": { 00:46:57.397 "impl_name": "posix" 00:46:57.397 } 00:46:57.397 }, 00:46:57.397 { 00:46:57.397 "method": "sock_impl_set_options", 00:46:57.397 "params": { 00:46:57.397 "impl_name": "ssl", 00:46:57.397 "recv_buf_size": 4096, 00:46:57.397 "send_buf_size": 4096, 00:46:57.397 "enable_recv_pipe": true, 00:46:57.397 "enable_quickack": false, 00:46:57.397 "enable_placement_id": 0, 00:46:57.397 "enable_zerocopy_send_server": true, 00:46:57.397 "enable_zerocopy_send_client": false, 00:46:57.397 "zerocopy_threshold": 0, 00:46:57.397 "tls_version": 0, 00:46:57.397 "enable_ktls": false 00:46:57.397 } 00:46:57.397 }, 00:46:57.397 { 00:46:57.397 "method": "sock_impl_set_options", 00:46:57.397 "params": { 00:46:57.397 "impl_name": "posix", 00:46:57.397 "recv_buf_size": 2097152, 00:46:57.397 "send_buf_size": 2097152, 00:46:57.397 "enable_recv_pipe": true, 00:46:57.397 "enable_quickack": false, 00:46:57.397 "enable_placement_id": 0, 00:46:57.397 "enable_zerocopy_send_server": true, 00:46:57.397 "enable_zerocopy_send_client": false, 00:46:57.397 "zerocopy_threshold": 0, 00:46:57.397 "tls_version": 0, 00:46:57.397 "enable_ktls": false 00:46:57.397 } 00:46:57.397 } 00:46:57.397 ] 00:46:57.397 }, 00:46:57.397 { 00:46:57.397 "subsystem": "vmd", 00:46:57.397 "config": [] 00:46:57.397 }, 00:46:57.397 { 00:46:57.397 "subsystem": "accel", 00:46:57.397 
"config": [ 00:46:57.397 { 00:46:57.397 "method": "accel_set_options", 00:46:57.397 "params": { 00:46:57.397 "small_cache_size": 128, 00:46:57.397 "large_cache_size": 16, 00:46:57.397 "task_count": 2048, 00:46:57.397 "sequence_count": 2048, 00:46:57.397 "buf_count": 2048 00:46:57.397 } 00:46:57.397 } 00:46:57.397 ] 00:46:57.397 }, 00:46:57.397 { 00:46:57.397 "subsystem": "bdev", 00:46:57.397 "config": [ 00:46:57.397 { 00:46:57.397 "method": "bdev_set_options", 00:46:57.397 "params": { 00:46:57.397 "bdev_io_pool_size": 65535, 00:46:57.397 "bdev_io_cache_size": 256, 00:46:57.397 "bdev_auto_examine": true, 00:46:57.397 "iobuf_small_cache_size": 128, 00:46:57.397 "iobuf_large_cache_size": 16 00:46:57.397 } 00:46:57.397 }, 00:46:57.397 { 00:46:57.397 "method": "bdev_raid_set_options", 00:46:57.397 "params": { 00:46:57.397 "process_window_size_kb": 1024, 00:46:57.397 "process_max_bandwidth_mb_sec": 0 00:46:57.397 } 00:46:57.397 }, 00:46:57.397 { 00:46:57.397 "method": "bdev_iscsi_set_options", 00:46:57.397 "params": { 00:46:57.397 "timeout_sec": 30 00:46:57.397 } 00:46:57.397 }, 00:46:57.397 { 00:46:57.397 "method": "bdev_nvme_set_options", 00:46:57.397 "params": { 00:46:57.397 "action_on_timeout": "none", 00:46:57.397 "timeout_us": 0, 00:46:57.397 "timeout_admin_us": 0, 00:46:57.397 "keep_alive_timeout_ms": 10000, 00:46:57.397 "arbitration_burst": 0, 00:46:57.397 "low_priority_weight": 0, 00:46:57.397 "medium_priority_weight": 0, 00:46:57.397 "high_priority_weight": 0, 00:46:57.397 "nvme_adminq_poll_period_us": 10000, 00:46:57.397 "nvme_ioq_poll_period_us": 0, 00:46:57.397 "io_queue_requests": 512, 00:46:57.397 "delay_cmd_submit": true, 00:46:57.397 "transport_retry_count": 4, 00:46:57.397 "bdev_retry_count": 3, 00:46:57.397 "transport_ack_timeout": 0, 00:46:57.397 "ctrlr_loss_timeout_sec": 0, 00:46:57.397 "reconnect_delay_sec": 0, 00:46:57.397 "fast_io_fail_timeout_sec": 0, 00:46:57.397 "disable_auto_failback": false, 00:46:57.397 "generate_uuids": false, 00:46:57.397 "transport_tos": 0, 00:46:57.397 "nvme_error_stat": false, 00:46:57.397 "rdma_srq_size": 0, 00:46:57.397 "io_path_stat": false, 00:46:57.397 "allow_accel_sequence": false, 00:46:57.397 "rdma_max_cq_size": 0, 00:46:57.397 "rdma_cm_event_timeout_ms": 0, 00:46:57.397 "dhchap_digests": [ 00:46:57.397 "sha256", 00:46:57.397 "sha384", 00:46:57.397 "sha512" 00:46:57.397 ], 00:46:57.397 "dhchap_dhgroups": [ 00:46:57.397 "null", 00:46:57.397 "ffdhe2048", 00:46:57.397 "ffdhe3072", 00:46:57.397 "ffdhe4096", 00:46:57.397 "ffdhe6144", 00:46:57.397 "ffdhe8192" 00:46:57.397 ], 00:46:57.397 "rdma_umr_per_io": false 00:46:57.398 } 00:46:57.398 }, 00:46:57.398 { 00:46:57.398 "method": "bdev_nvme_attach_controller", 00:46:57.398 "params": { 00:46:57.398 "name": "nvme0", 00:46:57.398 "trtype": "TCP", 00:46:57.398 "adrfam": "IPv4", 00:46:57.398 "traddr": "127.0.0.1", 00:46:57.398 "trsvcid": "4420", 00:46:57.398 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:57.398 "prchk_reftag": false, 00:46:57.398 "prchk_guard": false, 00:46:57.398 "ctrlr_loss_timeout_sec": 0, 00:46:57.398 "reconnect_delay_sec": 0, 00:46:57.398 "fast_io_fail_timeout_sec": 0, 00:46:57.398 "psk": "key0", 00:46:57.398 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:57.398 "hdgst": false, 00:46:57.398 "ddgst": false, 00:46:57.398 "multipath": "multipath" 00:46:57.398 } 00:46:57.398 }, 00:46:57.398 { 00:46:57.398 "method": "bdev_nvme_set_hotplug", 00:46:57.398 "params": { 00:46:57.398 "period_us": 100000, 00:46:57.398 "enable": false 00:46:57.398 } 00:46:57.398 }, 00:46:57.398 { 
00:46:57.398 "method": "bdev_wait_for_examine" 00:46:57.398 } 00:46:57.398 ] 00:46:57.398 }, 00:46:57.398 { 00:46:57.398 "subsystem": "nbd", 00:46:57.398 "config": [] 00:46:57.398 } 00:46:57.398 ] 00:46:57.398 }' 00:46:57.398 10:48:51 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:57.398 10:48:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:57.398 [2024-12-13 10:48:51.177626] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:46:57.398 [2024-12-13 10:48:51.177717] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93179 ] 00:46:57.655 [2024-12-13 10:48:51.291220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:57.655 [2024-12-13 10:48:51.400172] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:46:58.220 [2024-12-13 10:48:51.820860] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:58.220 10:48:51 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:58.220 10:48:51 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:46:58.220 10:48:51 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:46:58.220 10:48:51 keyring_file -- keyring/file.sh@121 -- # jq length 00:46:58.220 10:48:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:58.478 10:48:52 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:46:58.478 10:48:52 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:46:58.478 10:48:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:58.478 10:48:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:58.478 10:48:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:58.478 10:48:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:58.478 10:48:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:58.478 10:48:52 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:46:58.478 10:48:52 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:46:58.478 10:48:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:58.478 10:48:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:58.478 10:48:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:58.478 10:48:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:58.478 10:48:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:58.735 10:48:52 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:46:58.735 10:48:52 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:46:58.735 10:48:52 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:46:58.735 10:48:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:46:58.993 10:48:52 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:46:58.993 10:48:52 keyring_file -- 
keyring/file.sh@1 -- # cleanup 00:46:58.993 10:48:52 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.YPJZ8KK7Pq /tmp/tmp.dkE91TGToA 00:46:58.993 10:48:52 keyring_file -- keyring/file.sh@20 -- # killprocess 93179 00:46:58.993 10:48:52 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 93179 ']' 00:46:58.993 10:48:52 keyring_file -- common/autotest_common.sh@958 -- # kill -0 93179 00:46:58.993 10:48:52 keyring_file -- common/autotest_common.sh@959 -- # uname 00:46:58.993 10:48:52 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:58.993 10:48:52 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93179 00:46:58.993 10:48:52 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:58.993 10:48:52 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:58.993 10:48:52 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93179' 00:46:58.993 killing process with pid 93179 00:46:58.993 10:48:52 keyring_file -- common/autotest_common.sh@973 -- # kill 93179 00:46:58.993 Received shutdown signal, test time was about 1.000000 seconds 00:46:58.993 00:46:58.993 Latency(us) 00:46:58.993 [2024-12-13T09:48:52.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:58.993 [2024-12-13T09:48:52.884Z] =================================================================================================================== 00:46:58.993 [2024-12-13T09:48:52.884Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:58.993 10:48:52 keyring_file -- common/autotest_common.sh@978 -- # wait 93179 00:46:59.927 10:48:53 keyring_file -- keyring/file.sh@21 -- # killprocess 91383 00:46:59.927 10:48:53 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 91383 ']' 00:46:59.927 10:48:53 keyring_file -- common/autotest_common.sh@958 -- # kill -0 91383 00:46:59.927 10:48:53 keyring_file -- common/autotest_common.sh@959 -- # uname 00:46:59.927 10:48:53 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:59.927 10:48:53 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91383 00:46:59.927 10:48:53 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:59.927 10:48:53 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:59.927 10:48:53 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91383' 00:46:59.927 killing process with pid 91383 00:46:59.927 10:48:53 keyring_file -- common/autotest_common.sh@973 -- # kill 91383 00:46:59.927 10:48:53 keyring_file -- common/autotest_common.sh@978 -- # wait 91383 00:47:02.455 00:47:02.455 real 0m16.428s 00:47:02.455 user 0m35.603s 00:47:02.455 sys 0m2.978s 00:47:02.455 10:48:56 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:02.455 10:48:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:02.455 ************************************ 00:47:02.455 END TEST keyring_file 00:47:02.455 ************************************ 00:47:02.455 10:48:56 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:47:02.455 10:48:56 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:47:02.455 10:48:56 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:02.455 10:48:56 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:47:02.455 10:48:56 -- common/autotest_common.sh@10 -- # set +x 00:47:02.455 ************************************ 00:47:02.455 START TEST keyring_linux 00:47:02.455 ************************************ 00:47:02.455 10:48:56 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:47:02.455 Joined session keyring: 485814601 00:47:02.455 * Looking for test storage... 00:47:02.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:47:02.455 10:48:56 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:47:02.455 10:48:56 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:47:02.455 10:48:56 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:47:02.455 10:48:56 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@345 -- # : 1 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:02.455 10:48:56 keyring_linux -- scripts/common.sh@368 -- # return 0 00:47:02.455 10:48:56 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:02.455 10:48:56 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:47:02.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:02.455 --rc genhtml_branch_coverage=1 00:47:02.455 --rc genhtml_function_coverage=1 00:47:02.455 --rc genhtml_legend=1 00:47:02.455 --rc geninfo_all_blocks=1 00:47:02.455 --rc geninfo_unexecuted_blocks=1 00:47:02.455 00:47:02.455 ' 00:47:02.456 10:48:56 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:47:02.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:02.456 --rc genhtml_branch_coverage=1 00:47:02.456 --rc genhtml_function_coverage=1 00:47:02.456 --rc genhtml_legend=1 00:47:02.456 --rc geninfo_all_blocks=1 00:47:02.456 --rc geninfo_unexecuted_blocks=1 00:47:02.456 00:47:02.456 ' 00:47:02.456 10:48:56 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:47:02.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:02.456 --rc genhtml_branch_coverage=1 00:47:02.456 --rc genhtml_function_coverage=1 00:47:02.456 --rc genhtml_legend=1 00:47:02.456 --rc geninfo_all_blocks=1 00:47:02.456 --rc geninfo_unexecuted_blocks=1 00:47:02.456 00:47:02.456 ' 00:47:02.456 10:48:56 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:47:02.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:02.456 --rc genhtml_branch_coverage=1 00:47:02.456 --rc genhtml_function_coverage=1 00:47:02.456 --rc genhtml_legend=1 00:47:02.456 --rc geninfo_all_blocks=1 00:47:02.456 --rc geninfo_unexecuted_blocks=1 00:47:02.456 00:47:02.456 ' 00:47:02.456 10:48:56 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:47:02.456 10:48:56 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:02.456 10:48:56 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:47:02.456 10:48:56 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:02.456 10:48:56 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:02.456 10:48:56 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:02.456 10:48:56 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:02.456 10:48:56 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:02.456 10:48:56 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:02.456 10:48:56 keyring_linux -- paths/export.sh@5 -- # export PATH 00:47:02.456 10:48:56 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:02.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:02.456 10:48:56 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:47:02.456 10:48:56 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:47:02.456 10:48:56 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:47:02.456 10:48:56 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:47:02.456 10:48:56 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:47:02.456 10:48:56 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:47:02.456 10:48:56 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:47:02.456 10:48:56 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:47:02.456 10:48:56 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:47:02.456 10:48:56 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:02.456 10:48:56 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:47:02.456 10:48:56 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:47:02.456 10:48:56 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:47:02.456 10:48:56 keyring_linux -- nvmf/common.sh@733 -- # python - 00:47:02.714 10:48:56 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:47:02.714 10:48:56 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:47:02.714 /tmp/:spdk-test:key0 00:47:02.714 10:48:56 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:47:02.714 10:48:56 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:47:02.714 10:48:56 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:47:02.714 10:48:56 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:47:02.714 10:48:56 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:47:02.714 10:48:56 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:47:02.714 
10:48:56 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:47:02.714 10:48:56 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:47:02.714 10:48:56 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:47:02.714 10:48:56 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:02.714 10:48:56 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:47:02.714 10:48:56 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:47:02.714 10:48:56 keyring_linux -- nvmf/common.sh@733 -- # python - 00:47:02.714 10:48:56 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:47:02.714 10:48:56 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:47:02.714 /tmp/:spdk-test:key1 00:47:02.714 10:48:56 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=94090 00:47:02.714 10:48:56 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:47:02.714 10:48:56 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 94090 00:47:02.714 10:48:56 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 94090 ']' 00:47:02.714 10:48:56 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:02.714 10:48:56 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:02.714 10:48:56 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:02.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:02.714 10:48:56 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:02.714 10:48:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:02.714 [2024-12-13 10:48:56.504931] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
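The prep_key steps above wrap the raw hex secrets (00112233445566778899aabbccddeeff for key0, 112233445566778899aabbccddeeff00 for key1) in the NVMe TLS PSK interchange format before writing them to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 with mode 0600. The shell sketch below illustrates that wrapping; the 4-byte little-endian CRC32 suffix is an assumption inferred from this run's format_key helper rather than a statement of the format, and /tmp/example-psk.key is a placeholder path that the test itself does not use.

# Sketch: roughly what format_interchange_psk/format_key appear to do above.
key=00112233445566778899aabbccddeeff   # same test vector prep_key used for key0
digest=0                               # rendered as the ":00:" hash field
psk=$(python3 - <<EOF
import base64, zlib
key = b"$key"
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # assumed 4-byte CRC32 suffix
print("NVMeTLSkey-1:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()), end="")
EOF
)
printf '%s' "$psk" > /tmp/example-psk.key && chmod 0600 /tmp/example-psk.key

If those assumptions hold, the printed value for key0 matches the NVMeTLSkey-1:00:... string that keyctl stores in the session keyring a few lines below.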
00:47:02.714 [2024-12-13 10:48:56.505018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94090 ] 00:47:02.972 [2024-12-13 10:48:56.615328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:02.972 [2024-12-13 10:48:56.717724] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:47:03.905 10:48:57 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:03.905 10:48:57 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:47:03.905 10:48:57 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:47:03.905 10:48:57 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:03.905 10:48:57 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:03.905 [2024-12-13 10:48:57.573248] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:03.905 null0 00:47:03.905 [2024-12-13 10:48:57.605281] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:47:03.905 [2024-12-13 10:48:57.605645] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:47:03.905 10:48:57 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:03.905 10:48:57 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:47:03.905 442328232 00:47:03.905 10:48:57 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:47:03.905 1019090726 00:47:03.905 10:48:57 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=94317 00:47:03.905 10:48:57 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:47:03.905 10:48:57 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 94317 /var/tmp/bperf.sock 00:47:03.905 10:48:57 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 94317 ']' 00:47:03.905 10:48:57 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:03.905 10:48:57 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:03.905 10:48:57 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:03.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:03.905 10:48:57 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:03.905 10:48:57 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:03.905 [2024-12-13 10:48:57.705425] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
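The keyctl add user calls above place both interchange-format PSKs in the caller's session keyring (@s) and print the kernel key serial numbers (442328232 and 1019090726) that keyring_linux later resolves and unlinks. A condensed sketch of that session-keyring lifecycle, reusing the key0 test vector from this run and only keyctl subcommands that appear in the log, looks like this:

# Store the PSK under a named user key in the session keyring; keyctl prints the serial.
sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
keyctl search @s user :spdk-test:key0   # resolve the same serial by name (what get_keysn does)
keyctl print "$sn"                      # dump the stored interchange string for comparison
keyctl unlink "$sn"                     # remove the key; keyctl reports "1 links removed"

The bdevperf RPCs that follow refer to the key by its keyring name (--psk :spdk-test:key0) rather than by a file path, which is the difference from the keyring_file flow earlier in the log.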
00:47:03.905 [2024-12-13 10:48:57.705508] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94317 ] 00:47:04.163 [2024-12-13 10:48:57.818152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:04.163 [2024-12-13 10:48:57.928565] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:47:04.728 10:48:58 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:04.728 10:48:58 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:47:04.728 10:48:58 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:47:04.728 10:48:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:47:04.986 10:48:58 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:47:04.986 10:48:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:47:05.552 10:48:59 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:47:05.552 10:48:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:47:05.552 [2024-12-13 10:48:59.348022] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:05.552 nvme0n1 00:47:05.552 10:48:59 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:47:05.552 10:48:59 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:47:05.552 10:48:59 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:47:05.552 10:48:59 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:47:05.552 10:48:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:05.552 10:48:59 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:47:05.809 10:48:59 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:47:05.809 10:48:59 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:47:05.809 10:48:59 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:47:05.810 10:48:59 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:47:05.810 10:48:59 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:05.810 10:48:59 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:47:05.810 10:48:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:06.067 10:48:59 keyring_linux -- keyring/linux.sh@25 -- # sn=442328232 00:47:06.067 10:48:59 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:47:06.067 10:48:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:47:06.067 10:48:59 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 442328232 == \4\4\2\3\2\8\2\3\2 ]] 00:47:06.067 10:48:59 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 442328232 00:47:06.067 10:48:59 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:47:06.067 10:48:59 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:06.067 Running I/O for 1 seconds... 00:47:07.440 15889.00 IOPS, 62.07 MiB/s 00:47:07.441 Latency(us) 00:47:07.441 [2024-12-13T09:49:01.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:07.441 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:47:07.441 nvme0n1 : 1.01 15891.49 62.08 0.00 0.00 8020.04 3479.65 10985.08 00:47:07.441 [2024-12-13T09:49:01.332Z] =================================================================================================================== 00:47:07.441 [2024-12-13T09:49:01.332Z] Total : 15891.49 62.08 0.00 0.00 8020.04 3479.65 10985.08 00:47:07.441 { 00:47:07.441 "results": [ 00:47:07.441 { 00:47:07.441 "job": "nvme0n1", 00:47:07.441 "core_mask": "0x2", 00:47:07.441 "workload": "randread", 00:47:07.441 "status": "finished", 00:47:07.441 "queue_depth": 128, 00:47:07.441 "io_size": 4096, 00:47:07.441 "runtime": 1.007898, 00:47:07.441 "iops": 15891.489019722234, 00:47:07.441 "mibps": 62.076128983289976, 00:47:07.441 "io_failed": 0, 00:47:07.441 "io_timeout": 0, 00:47:07.441 "avg_latency_us": 8020.036608246595, 00:47:07.441 "min_latency_us": 3479.649523809524, 00:47:07.441 "max_latency_us": 10985.081904761904 00:47:07.441 } 00:47:07.441 ], 00:47:07.441 "core_count": 1 00:47:07.441 } 00:47:07.441 10:49:00 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:07.441 10:49:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:07.441 10:49:01 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:47:07.441 10:49:01 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:47:07.441 10:49:01 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:47:07.441 10:49:01 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:47:07.441 10:49:01 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:47:07.441 10:49:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:07.699 10:49:01 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:47:07.699 10:49:01 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:47:07.699 10:49:01 keyring_linux -- keyring/linux.sh@23 -- # return 00:47:07.699 10:49:01 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:07.699 10:49:01 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:47:07.699 10:49:01 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:47:07.699 10:49:01 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:47:07.699 10:49:01 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:07.699 10:49:01 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:47:07.699 10:49:01 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:07.699 10:49:01 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:07.699 10:49:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:07.699 [2024-12-13 10:49:01.538816] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:47:07.699 [2024-12-13 10:49:01.538876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032ad00 (107): Transport endpoint is not connected 00:47:07.699 [2024-12-13 10:49:01.539859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032ad00 (9): Bad file descriptor 00:47:07.699 [2024-12-13 10:49:01.540857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:47:07.699 [2024-12-13 10:49:01.540876] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:47:07.699 [2024-12-13 10:49:01.540892] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:47:07.699 [2024-12-13 10:49:01.540904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:47:07.699 request: 00:47:07.699 { 00:47:07.699 "name": "nvme0", 00:47:07.699 "trtype": "tcp", 00:47:07.699 "traddr": "127.0.0.1", 00:47:07.699 "adrfam": "ipv4", 00:47:07.699 "trsvcid": "4420", 00:47:07.699 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:07.699 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:07.699 "prchk_reftag": false, 00:47:07.699 "prchk_guard": false, 00:47:07.699 "hdgst": false, 00:47:07.699 "ddgst": false, 00:47:07.699 "psk": ":spdk-test:key1", 00:47:07.699 "allow_unrecognized_csi": false, 00:47:07.699 "method": "bdev_nvme_attach_controller", 00:47:07.699 "req_id": 1 00:47:07.699 } 00:47:07.699 Got JSON-RPC error response 00:47:07.699 response: 00:47:07.699 { 00:47:07.699 "code": -5, 00:47:07.699 "message": "Input/output error" 00:47:07.699 } 00:47:07.699 10:49:01 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:47:07.699 10:49:01 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:07.699 10:49:01 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:07.699 10:49:01 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:07.699 10:49:01 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:47:07.699 10:49:01 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:47:07.699 10:49:01 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:47:07.699 10:49:01 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:47:07.699 10:49:01 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:47:07.699 10:49:01 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:47:07.699 10:49:01 keyring_linux -- keyring/linux.sh@33 -- # sn=442328232 00:47:07.699 10:49:01 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 442328232 00:47:07.699 1 links removed 00:47:07.699 10:49:01 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:47:07.699 10:49:01 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:47:07.699 10:49:01 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:47:07.699 10:49:01 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:47:07.699 10:49:01 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:47:07.699 10:49:01 keyring_linux -- keyring/linux.sh@33 -- # sn=1019090726 00:47:07.699 10:49:01 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1019090726 00:47:07.699 1 links removed 00:47:07.699 10:49:01 keyring_linux -- keyring/linux.sh@41 -- # killprocess 94317 00:47:07.699 10:49:01 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 94317 ']' 00:47:07.699 10:49:01 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 94317 00:47:07.699 10:49:01 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:47:07.699 10:49:01 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:07.699 10:49:01 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94317 00:47:07.957 10:49:01 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:07.957 10:49:01 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:07.957 10:49:01 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94317' 00:47:07.957 killing process with pid 94317 00:47:07.957 10:49:01 keyring_linux -- common/autotest_common.sh@973 -- # kill 94317 00:47:07.957 Received shutdown signal, test time was about 1.000000 seconds 00:47:07.957 00:47:07.957 Latency(us) 
00:47:07.957 [2024-12-13T09:49:01.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:07.957 [2024-12-13T09:49:01.848Z] =================================================================================================================== 00:47:07.957 [2024-12-13T09:49:01.848Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:07.957 10:49:01 keyring_linux -- common/autotest_common.sh@978 -- # wait 94317 00:47:08.939 10:49:02 keyring_linux -- keyring/linux.sh@42 -- # killprocess 94090 00:47:08.939 10:49:02 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 94090 ']' 00:47:08.939 10:49:02 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 94090 00:47:08.939 10:49:02 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:47:08.939 10:49:02 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:08.939 10:49:02 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94090 00:47:08.939 10:49:02 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:08.939 10:49:02 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:08.939 10:49:02 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94090' 00:47:08.939 killing process with pid 94090 00:47:08.939 10:49:02 keyring_linux -- common/autotest_common.sh@973 -- # kill 94090 00:47:08.939 10:49:02 keyring_linux -- common/autotest_common.sh@978 -- # wait 94090 00:47:11.545 00:47:11.545 real 0m8.806s 00:47:11.545 user 0m14.331s 00:47:11.545 sys 0m1.633s 00:47:11.545 10:49:04 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:11.545 10:49:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:11.545 ************************************ 00:47:11.545 END TEST keyring_linux 00:47:11.545 ************************************ 00:47:11.545 10:49:04 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:47:11.545 10:49:04 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:47:11.545 10:49:04 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:47:11.545 10:49:04 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:47:11.545 10:49:04 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:47:11.545 10:49:04 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:47:11.545 10:49:04 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:47:11.545 10:49:04 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:47:11.545 10:49:04 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:47:11.545 10:49:04 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:47:11.545 10:49:04 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:47:11.545 10:49:04 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:47:11.545 10:49:04 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:47:11.545 10:49:04 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:47:11.545 10:49:04 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:47:11.545 10:49:04 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:47:11.545 10:49:04 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:47:11.545 10:49:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:11.545 10:49:04 -- common/autotest_common.sh@10 -- # set +x 00:47:11.545 10:49:04 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:47:11.545 10:49:04 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:47:11.545 10:49:04 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:47:11.545 10:49:04 -- common/autotest_common.sh@10 -- # set +x 00:47:16.811 INFO: APP EXITING 00:47:16.812 INFO: killing all VMs 
00:47:16.812 INFO: killing vhost app 00:47:16.812 INFO: EXIT DONE 00:47:19.342 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:47:19.342 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:47:19.342 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:47:19.342 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:47:19.342 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:47:19.342 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:47:19.342 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:47:19.342 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:47:19.342 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:47:19.342 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:47:19.342 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:47:19.342 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:47:19.342 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:47:19.342 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:47:19.342 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:47:19.342 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:47:19.342 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:47:22.629 Cleaning 00:47:22.629 Removing: /var/run/dpdk/spdk0/config 00:47:22.629 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:47:22.629 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:47:22.629 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:47:22.629 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:47:22.629 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:47:22.629 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:47:22.629 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:47:22.629 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:47:22.629 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:47:22.629 Removing: /var/run/dpdk/spdk0/hugepage_info 00:47:22.629 Removing: /var/run/dpdk/spdk1/config 00:47:22.629 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:47:22.629 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:47:22.629 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:47:22.629 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:47:22.629 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:47:22.629 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:47:22.629 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:47:22.629 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:47:22.629 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:47:22.629 Removing: /var/run/dpdk/spdk1/hugepage_info 00:47:22.629 Removing: /var/run/dpdk/spdk2/config 00:47:22.629 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:47:22.629 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:47:22.629 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:47:22.629 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:47:22.629 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:47:22.629 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:47:22.629 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:47:22.629 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:47:22.629 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:47:22.629 Removing: /var/run/dpdk/spdk2/hugepage_info 00:47:22.629 Removing: /var/run/dpdk/spdk3/config 00:47:22.629 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:47:22.629 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:47:22.629 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:47:22.629 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:47:22.629 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:47:22.629 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:47:22.629 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:47:22.629 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:47:22.629 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:47:22.629 Removing: /var/run/dpdk/spdk3/hugepage_info 00:47:22.629 Removing: /var/run/dpdk/spdk4/config 00:47:22.629 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:47:22.629 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:47:22.629 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:47:22.629 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:47:22.629 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:47:22.629 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:47:22.629 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:47:22.629 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:47:22.629 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:47:22.629 Removing: /var/run/dpdk/spdk4/hugepage_info 00:47:22.629 Removing: /dev/shm/bdev_svc_trace.1 00:47:22.629 Removing: /dev/shm/nvmf_trace.0 00:47:22.629 Removing: /dev/shm/spdk_tgt_trace.pid3697049 00:47:22.629 Removing: /var/run/dpdk/spdk0 00:47:22.629 Removing: /var/run/dpdk/spdk1 00:47:22.629 Removing: /var/run/dpdk/spdk2 00:47:22.629 Removing: /var/run/dpdk/spdk3 00:47:22.629 Removing: /var/run/dpdk/spdk4 00:47:22.629 Removing: /var/run/dpdk/spdk_pid15916 00:47:22.629 Removing: /var/run/dpdk/spdk_pid2375 00:47:22.629 Removing: /var/run/dpdk/spdk_pid24864 00:47:22.629 Removing: /var/run/dpdk/spdk_pid26645 00:47:22.629 Removing: /var/run/dpdk/spdk_pid27666 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3693182 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3694658 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3697049 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3698157 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3699779 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3700479 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3701765 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3701888 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3702679 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3704384 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3705861 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3706687 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3707369 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3708172 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3708819 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3709175 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3709526 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3709806 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3710759 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3714147 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3714849 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3715550 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3715774 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3717388 00:47:22.629 Removing: /var/run/dpdk/spdk_pid3717614 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3719390 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3719469 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3720138 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3720366 00:47:22.630 Removing: 
/var/run/dpdk/spdk_pid3720848 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3721081 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3722534 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3722780 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3723140 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3727386 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3731900 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3742600 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3743276 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3747698 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3748155 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3752790 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3758988 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3761749 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3772648 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3782054 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3784381 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3785516 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3803176 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3807519 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3892712 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3898077 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3904162 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3914395 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3943204 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3948106 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3949869 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3951776 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3952126 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3952580 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3952822 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3953767 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3955765 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3957399 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3958120 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3960609 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3961545 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3962651 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3966890 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3972742 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3972744 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3972746 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3976762 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3980889 00:47:22.630 Removing: /var/run/dpdk/spdk_pid3986303 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4023019 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4027368 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4033468 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4035628 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4037810 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4039777 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4044821 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4049773 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4054159 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4061879 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4061912 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4067241 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4067471 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4067690 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4068142 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4068295 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4069721 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4071274 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4073046 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4074619 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4076172 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4077938 00:47:22.630 Removing: 
/var/run/dpdk/spdk_pid4083889 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4084654 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4086354 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4087366 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4093405 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4096303 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4102338 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4107792 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4116813 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4124114 00:47:22.630 Removing: /var/run/dpdk/spdk_pid4124140 00:47:22.889 Removing: /var/run/dpdk/spdk_pid4142559 00:47:22.889 Removing: /var/run/dpdk/spdk_pid4143449 00:47:22.889 Removing: /var/run/dpdk/spdk_pid4144138 00:47:22.889 Removing: /var/run/dpdk/spdk_pid4145158 00:47:22.889 Removing: /var/run/dpdk/spdk_pid4146720 00:47:22.889 Removing: /var/run/dpdk/spdk_pid4147416 00:47:22.889 Removing: /var/run/dpdk/spdk_pid4148194 00:47:22.889 Removing: /var/run/dpdk/spdk_pid4148990 00:47:22.889 Removing: /var/run/dpdk/spdk_pid4153402 00:47:22.889 Removing: /var/run/dpdk/spdk_pid4153850 00:47:22.889 Removing: /var/run/dpdk/spdk_pid4160014 00:47:22.889 Removing: /var/run/dpdk/spdk_pid4160286 00:47:22.889 Removing: /var/run/dpdk/spdk_pid4165869 00:47:22.889 Removing: /var/run/dpdk/spdk_pid4170134 00:47:22.889 Removing: /var/run/dpdk/spdk_pid4180003 00:47:22.889 Removing: /var/run/dpdk/spdk_pid4180486 00:47:22.889 Removing: /var/run/dpdk/spdk_pid4184677 00:47:22.889 Removing: /var/run/dpdk/spdk_pid4185130 00:47:22.889 Removing: /var/run/dpdk/spdk_pid4189832 00:47:22.889 Removing: /var/run/dpdk/spdk_pid44772 00:47:22.889 Removing: /var/run/dpdk/spdk_pid48963 00:47:22.889 Removing: /var/run/dpdk/spdk_pid51814 00:47:22.889 Removing: /var/run/dpdk/spdk_pid5266 00:47:22.889 Removing: /var/run/dpdk/spdk_pid59693 00:47:22.889 Removing: /var/run/dpdk/spdk_pid59810 00:47:22.889 Removing: /var/run/dpdk/spdk_pid64764 00:47:22.889 Removing: /var/run/dpdk/spdk_pid66889 00:47:22.889 Removing: /var/run/dpdk/spdk_pid69002 00:47:22.889 Removing: /var/run/dpdk/spdk_pid70252 00:47:22.889 Removing: /var/run/dpdk/spdk_pid72407 00:47:22.889 Removing: /var/run/dpdk/spdk_pid73856 00:47:22.889 Removing: /var/run/dpdk/spdk_pid82572 00:47:22.889 Removing: /var/run/dpdk/spdk_pid83027 00:47:22.889 Removing: /var/run/dpdk/spdk_pid84056 00:47:22.889 Removing: /var/run/dpdk/spdk_pid86707 00:47:22.889 Removing: /var/run/dpdk/spdk_pid87164 00:47:22.889 Removing: /var/run/dpdk/spdk_pid87616 00:47:22.889 Removing: /var/run/dpdk/spdk_pid91383 00:47:22.889 Removing: /var/run/dpdk/spdk_pid91595 00:47:22.889 Removing: /var/run/dpdk/spdk_pid93179 00:47:22.889 Removing: /var/run/dpdk/spdk_pid94090 00:47:22.889 Removing: /var/run/dpdk/spdk_pid94317 00:47:22.889 Clean 00:47:22.889 10:49:16 -- common/autotest_common.sh@1453 -- # return 0 00:47:22.889 10:49:16 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:47:22.889 10:49:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:22.889 10:49:16 -- common/autotest_common.sh@10 -- # set +x 00:47:22.889 10:49:16 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:47:22.889 10:49:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:22.889 10:49:16 -- common/autotest_common.sh@10 -- # set +x 00:47:23.147 10:49:16 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:47:23.147 10:49:16 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:47:23.147 10:49:16 -- 
spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:47:23.147 10:49:16 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:47:23.147 10:49:16 -- spdk/autotest.sh@398 -- # hostname 00:47:23.147 10:49:16 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:47:23.147 geninfo: WARNING: invalid characters removed from testname! 00:47:45.067 10:49:36 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:45.326 10:49:39 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:47.226 10:49:40 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:49.125 10:49:42 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:51.022 10:49:44 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:52.396 10:49:46 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:54.300 10:49:48 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:47:54.300 10:49:48 -- 
spdk/autorun.sh@1 -- $ timing_finish 00:47:54.300 10:49:48 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:47:54.300 10:49:48 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:47:54.300 10:49:48 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:47:54.300 10:49:48 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:47:54.300 + [[ -n 3614924 ]] 00:47:54.300 + sudo kill 3614924 00:47:54.309 [Pipeline] } 00:47:54.325 [Pipeline] // stage 00:47:54.330 [Pipeline] } 00:47:54.344 [Pipeline] // timeout 00:47:54.349 [Pipeline] } 00:47:54.362 [Pipeline] // catchError 00:47:54.368 [Pipeline] } 00:47:54.382 [Pipeline] // wrap 00:47:54.388 [Pipeline] } 00:47:54.400 [Pipeline] // catchError 00:47:54.409 [Pipeline] stage 00:47:54.411 [Pipeline] { (Epilogue) 00:47:54.423 [Pipeline] catchError 00:47:54.425 [Pipeline] { 00:47:54.437 [Pipeline] echo 00:47:54.439 Cleanup processes 00:47:54.450 [Pipeline] sh 00:47:54.737 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:54.737 106794 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:54.750 [Pipeline] sh 00:47:55.036 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:55.036 ++ grep -v 'sudo pgrep' 00:47:55.036 ++ awk '{print $1}' 00:47:55.036 + sudo kill -9 00:47:55.036 + true 00:47:55.048 [Pipeline] sh 00:47:55.332 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:48:07.543 [Pipeline] sh 00:48:07.824 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:48:07.824 Artifacts sizes are good 00:48:07.839 [Pipeline] archiveArtifacts 00:48:07.846 Archiving artifacts 00:48:08.019 [Pipeline] sh 00:48:08.345 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:48:08.358 [Pipeline] cleanWs 00:48:08.368 [WS-CLEANUP] Deleting project workspace... 00:48:08.368 [WS-CLEANUP] Deferred wipeout is used... 00:48:08.375 [WS-CLEANUP] done 00:48:08.376 [Pipeline] } 00:48:08.393 [Pipeline] // catchError 00:48:08.404 [Pipeline] sh 00:48:08.685 + logger -p user.info -t JENKINS-CI 00:48:08.693 [Pipeline] } 00:48:08.706 [Pipeline] // stage 00:48:08.711 [Pipeline] } 00:48:08.724 [Pipeline] // node 00:48:08.728 [Pipeline] End of Pipeline 00:48:08.765 Finished: SUCCESS
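For reference, the coverage post-processing visible in the lcov invocations above reduces to: capture the counters produced during the test run, merge them with the pre-test baseline, and strip third-party and tooling paths from the combined tracefile. A condensed sketch using the same lcov options shown in the log; the output directory is abbreviated here and the extra --rc genhtml/geninfo switches from the log are omitted for brevity:

    # Condensed sketch of the coverage merge/filter sequence run at the end of autotest.sh.
    out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output     # illustrative path
    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
    # Capture counters gathered while the tests ran.
    $LCOV -c --no-external -d ./spdk -t spdk-wfp-04 -o "$out/cov_test.info"
    # Merge the baseline and test captures into a single tracefile.
    $LCOV -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    # Remove third-party and helper-app paths from the combined report
    # (the job passes --ignore-errors so patterns that match nothing are tolerated).
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        $LCOV --ignore-errors unused -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done
    rm -f "$out/cov_base.info" "$out/cov_test.info"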